Renata Wassermann on Belief Revision in DLs

I used some of Renata’s work in my thesis and we’ve corresponded on and off. One of her students is visiting us and she came and gave a talk! It was very nice.

One interesting bit was that they did some experiments on partial meet vs. kernel-based revision and found that, “contrary to computer science intuition”, partial meet is generally more efficient. OK, that’s a lot of jargon; here’s an attempt to sort it out succinctly.

Given a set of beliefs, B (think propositional sentences, i.e., things which can be true or false), and some sentence S which follows from B, how can we shrink B so that S no longer follows? This isn’t easy! S may not be a member of B. S might be entailed by lots of different parts of B.

One approach is to find all the minimal subsets of B which entail S. Since each is minimal, we can break its entailment by deleting just one of its elements. If we fix every such subset, we have a fix for B. These subsets are called kernels (or justifications). They correspond nicely to typical debugging approaches.
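To make the kernel idea concrete, here’s a minimal, illustrative Python sketch. The representation and all the names are mine, not from the talk: beliefs are modelled as predicates over truth assignments, and entailment is brute-forced over every assignment, which only works for tiny propositional bases.

```python
from itertools import combinations, product

def entails(base, goal, atoms):
    """True iff every assignment satisfying all of `base` satisfies `goal`."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(f(v) for f in base) and not goal(v):
            return False
    return True

def kernels(base, goal, atoms):
    """All minimal subsets of `base` that entail `goal`.

    Enumerating subsets smallest-first means any later superset of an
    already-found kernel is caught by the minimality check."""
    found = []
    for size in range(1, len(base) + 1):
        for subset in combinations(base, size):
            if any(set(k) <= set(subset) for k in found):
                continue  # contains a smaller kernel, so not minimal
            if entails(subset, goal, atoms):
                found.append(subset)
    return found

# Toy base B = {p, p -> q, r, r -> q}; the unwanted consequence is q.
p  = lambda v: v["p"]
pq = lambda v: (not v["p"]) or v["q"]
r  = lambda v: v["r"]
rq = lambda v: (not v["r"]) or v["q"]
q  = lambda v: v["q"]

ks = kernels([p, pq, r, rq], q, ["p", "q", "r"])
# Two kernels: {p, p -> q} and {r, r -> q}. Deleting one element from
# each of them breaks the entailment of q.
```

Real belief-revision systems use proper reasoners rather than truth-table enumeration; this just shows the shape of the construction.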

Alternatively, we could try to build a maximal subset of B which doesn’t entail S. There will be many such subsets, but obviously each does the job. Call such a set a remainder. We can just pick one remainder, or take the intersection of several (or all of them). If we take fewer than all, we have partial meet contraction.
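Here’s a minimal, illustrative Python sketch of remainders and partial meet contraction (again, the representation and names are mine: beliefs are modelled as predicates over truth assignments, and entailment is brute-forced, so this only works for tiny propositional bases):

```python
from itertools import combinations, product

def entails(base, goal, atoms):
    """True iff every assignment satisfying all of `base` satisfies `goal`."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(f(v) for f in base) and not goal(v):
            return False
    return True

def remainders(base, goal, atoms):
    """All maximal subsets of `base` that do NOT entail `goal`.

    Enumerating subsets largest-first means any later subset of an
    already-found remainder is caught by the maximality check."""
    found = []
    for size in range(len(base), -1, -1):
        for subset in combinations(base, size):
            if any(set(subset) <= set(m) for m in found):
                continue  # inside a bigger remainder, so not maximal
            if not entails(subset, goal, atoms):
                found.append(subset)
    return found

def partial_meet(base, goal, atoms, select):
    """Contract `base` by intersecting the remainders that `select` keeps."""
    chosen = select(remainders(base, goal, atoms))
    out = set(chosen[0])
    for m in chosen[1:]:
        out &= set(m)
    return out

# Same toy base: B = {p, p -> q, r, r -> q}, unwanted consequence q.
p  = lambda v: v["p"]
pq = lambda v: (not v["p"]) or v["q"]
r  = lambda v: v["r"]
rq = lambda v: (not v["r"]) or v["q"]
q  = lambda v: v["q"]

base, atoms = [p, pq, r, rq], ["p", "q", "r"]
# Four remainders: {p, r}, {p, r->q}, {p->q, r}, {p->q, r->q}.
full = partial_meet(base, q, atoms, lambda rs: rs)    # full meet: keeps nothing
part = partial_meet(base, q, atoms, lambda rs: rs[:2])  # keeps p
```

The `select` function is where all the interesting policy lives: full meet (select everything) throws away too much, while a partial selection can retain beliefs like p here.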

Now Renata said something that didn’t make sense to me, i.e., that the reason kernel contraction has been preferred is that computer scientists think it’s more efficient because “kernels are smaller”. But… I’ve never heard that. The concepts are dual, but kernels are easier for humans to deal with. They capture the logic of how the undesired entailment works. It never occurred to me to even ask which approach is more efficient. It depends on the nature of the sets!

One interesting difference between the debugging folks and the revision folks is that debugging folks usually consider minimal repairs, i.e., selections from the set of justifications that contain no smaller repairs. This corresponds to full meet contraction, which has a number of issues. If you go for partial meet, then you have to do a bit of work to get an algorithm that finds desirable contractions, compared to the remainder-based approach.

Of course, even from a debugging perspective a partial meet approach might make sense. When you figure out a bug, you might make more changes than just the minimal one that fixes the failing test in focus. After all, you might get an insight about a particular function call and change how you call it everywhere. You might realise that a module is just irredeemably broken and replace it entirely.

The Viz Fallacy

Taxonomies are fun and usually tree-shaped. Trees are “easy” to visualize. When people have a taxonomy, they often like to visualize it.

But the results aren’t always so nice. Given that this is a taxonomy of fallacies, I found their misuse of a visualization highly amusing. It is also reminiscent of one of my thesis topics (non-logical reductios).

mc et al called this the pathetic fallacy (of RDF).

Experiments vs. Case Studies

My recent post on validities was motivated by John Proveti posting a draft of an abstract he was submitting about the Salaita affair. John focused on exploring the use of case studies in moral analysis. This prompts me to write up (again) my spiel on experiments and case studies.

The primary aim of a controlled experiment is internal validity, that is, demonstrating causal relationships. The primary tool for this is isolation, that is, we try to remove as much as possible so that any correlations we see are more likely to be causal. If you manipulate variable v1 and variable v2 responds systematically, and there are no other factors that change through the manipulation, then you have a case that changes in v1 cause those changes in v2. (Lots of caveats. You want to repeat it to rule out spontaneous changes to v2. Etc.) Of course, you have lots of problems holding everything except v1 and v2 fixed. It’s probably impossible in almost all cases. You may not know all the factors in play! This is especially true when it comes to people. So, you control as much as you can and use a large number of randomly selected participants to smooth out the unknowns (roughly). But critically, you shrink the v and up the n (i.e., repetitions).

Low v tends to hurt both external and ecological validity. In other circumstances, other factors might produce the changes in v2 (or block them!). In other controlled circumstances, it might be fairly easy to find the interaction. But in field circumstances, the number of factors potentially in play explodes.

Thus, the case study, where we lower n (to n = 1) in order to explore arbitrary numbers of factors. Of course, the price we pay for that is weakened internal and external validity, indeed, any sort of generalisability.

Of course, in non-experimental philosophy, the main form of experiment is the thought experiment. But you can see the experimental philosophy at work: the reason philosophers dream up outlandish circumstances is to isolate and amplify the target v1 and v2. Thus, in the trolley problem, you have a simple choice. No one else is involved, and we pit number of lives vs. omission or commission, and the result is death. That the example is hard to relate to is a perfect example of a failure of ecological validity. But philosophers get so used to intuiting under thought-laboratory conditions that they become a bit like mice who have been bred to be susceptible to cancer: their reactions and thinking are suspect. (That it is all so clean and clever and pure makes it seem like one is thinking better. Bad mistake!)

Of course, we can have thought case studies as well. This is roughly what I take Martha Nussbaum to claim about novels in “Flawed Crystals: James’s The Golden Bowl and Literature as Moral Philosophy”:

To show forth the force and truth of the Aristotelian claim that “the decision rests with perception,” we need, then-either side by side with a philosophical “outline” or inside it—texts which display to us the complexity, the indeterminacy, the sheer difficulty of moral choice, and which show us, as this text does concerning Maggie Verver, the childishness, the refusal of life involved in fixing everything in advance according to some system of inviolable rules. This task cannot be easily accomplished by texts which speak in universal terms—for one of the difficulties of deliberation stressed by this view is that of grasping the uniqueness of the new particular.  Nor can it easily be done by texts which speak with the hardness or plainness which moral philosophy has traditionally chosen for its style—for how can this style at all convey the way in which the “matter of the practical” appears before the agent in all of its bewildering complexity, without its morally salient features stamped on its face? And how, without conveying this, can it convey the active adventure of the deliberative intelligence, the “yearnings of thought and excursions of sympathy” (p. 521) that make up much of our actual moral life?

I take this as precisely the point that more abstract explorations of moral reasoning lack ecological validity.

This, of course, has implications both for moral theorising and for moral education. Our moral theories are likely to be wrong about moral life in the field (and, I would argue, in the lab as well!). (I think this is what Bernard Williams was partly complaining about in Utilitarianism: For and Against.) But further, learning how to reason well about action in the circumstances of our lives won’t work by ingesting abstract moral theories (even if they are more or less true). We still need to cultivate moral judgement.

I think we can do philosophical case studies that are not thought case studies just as we can do experimental philosophy without thought experiments. Indeed, I recommend it.

On Validities

In an Introduction to Symbolic Logic class offered by a philosophy department, you will probably learn:

  1. An argument is valid if, when the premises are all true, the conclusion is (or must be) true.
  2. An argument is sound if it is valid and the premises are all true.

In such a class with a critical reasoning component, you will also learn about various common logical fallacies, that is, argument forms which people take as valid but which are not (e.g., affirming the consequent, which is basically messing up modus ponens).
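For propositional arguments, the textbook definition of validity can be checked mechanically by enumerating truth assignments. A small illustrative sketch (the names and encoding are mine), contrasting modus ponens with affirming the consequent:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Deductively valid: every assignment making all premises true
    also makes the conclusion true."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(prem(v) for prem in premises) and not conclusion(v):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: P, P -> Q, therefore Q — valid.
mp = valid([lambda v: v["P"], lambda v: implies(v["P"], v["Q"])],
           lambda v: v["Q"], ["P", "Q"])

# Affirming the consequent: Q, P -> Q, therefore P — invalid
# (the assignment P = false, Q = true is a counterexample).
ac = valid([lambda v: v["Q"], lambda v: implies(v["P"], v["Q"])],
           lambda v: v["P"], ["P", "Q"])
```

This is exactly why the fallacy is a fallacy: there is an assignment where all the premises are true and the conclusion is false.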

You might also get some discussion of “invalid but good” arguments, namely, various inductive arguments. (Perhaps these days texts include some proper statistical reasoning.) This restricted notion is passé; I think reserving “validity” for “deductive validity” is unhelpful. In many scientific papers, there will be a section on “threats to validity” where the authors address various issues with the evidence they provide, typically:

  1. Internal validity (the degree to which the theory, experimental design, and results support concluding that there is a causal relationship between key correlated variables)
  2. External validity (the degree to which the theory, experimental design, and results generalise to other (experimental) populations and situations)
  3. Ecological (or field) validity (the degree to which the theory, experimental design, and results generalise to “real world” conditions)

There are dozens of other sorts of validity. Indeed, the Wikipedia article presents deductive validity as restricted:

It is generally accepted that the concept of scientific validity addresses the nature of reality and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the truth of inferences made from premises.

I like the general idea that the validity of an argument is the extent to which the argument achieves what it is trying to achieve. Typically, this is to establish the truth (or likelihood) of a conclusion. Deductions are useful, but they aren’t what you need most of the time. Indeed, per usual, establishing the truth of the premises is critical! And we usually can’t fully determine the truth of the premises! So, we need to manage lots of kinds of evidence in lots of different ways.

An argument is a relationship between evidence and a claim. The case where the relationship is deductive is wonderful and exciting and fun, but let’s not oversell it.

Is it possible to (knowingly, truly) believe a contradiction? (Part 2)

We are considering three variants of a question:

  1. Is it possible to believe a contradiction?
  2. Is it possible to knowingly believe a contradiction?
  3. Is it possible to truly believe a contradiction?

Each question arises from a certain way of saying “yes” to the prior question.

In part 1, I argued that we can know we believe a contradiction in the sense that it’s highly likely for us to have contradictory beliefs:

This “yes” answer roughly says: We can believe a contradiction if we do not know we believe it.

Even this needs some care. Given how crap we all are at thinking, we can be pretty sure we have inconsistent beliefs. So, we might well know that we believe a contradiction! But knowing that we believe some contradiction doesn’t mean we know which contradiction we believe. So maybe that’s ok!

This raises the question: can we knowingly believe a contradiction in the sense of knowing which contradiction it is we believe?

2. Is it possible to knowingly believe a contradiction?

If it is possible to have believed a contradiction (though we didn’t know it at the time), then it’s possible to know that we have done so. Thus, it’s possible to come to know we believe a contradiction. This isn’t all that uncommon to see!

For me, the paradigm is Frege. I’m pretty sure he believed his axioms for arithmetic. They implied Russell’s paradox. “Arithmetic totters!” (so the story goes). He came to realise there was a contradiction in his belief set. He could even name it: Russell’s paradox (the set of all sets that do not contain themselves)!

So we can have contradictory beliefs. We can come to recognise them as contradictory. So what’s the barrier to continuing to believe them? We might even say that we know that we can do so, since we were doing so all along!

But but but! Perhaps rationality demands that when we recognise that we are believing a specific contradiction, we give up one or the other contrary (or both). This seems to reduce to the irrationality of believing a known falsehood.

It may be irrational, but so what?! While it may be irrational to believe a contradiction, it doesn’t seem impossible. The basic intuition is that believing is believing as true. For a “known” false contingent belief, we can sort of “forget” that it is false, or pretend that it is true (which is easy to do). But there’s no scenario in which a contradiction is true, so how can we believe it as true? On this line, if we cannot believe it as true, we cannot actually believe it.

What, then, were we doing when we had the contradictory belief set? Well, that’s where ignorance came in. We believed things as true which were necessarily false because we didn’t know that they were necessarily false. We didn’t put together the facts of their falseness. And this is why, when we come to realise the contradiction, the story goes, we can’t believe it. There’s no place to hide from the falsity, so our belief (as true) vanishes.

(Of course, with a strong will to believe we might cloud our insight into the falsity. But that’s just either denial or imposing ignorance.)

I think we need a very strong version of “knowingly” to make this work. To wit, the belief, B,

  1. must be fully occurrent (all parts are strongly “in view”)
  2. must be unmistakably contradictory (P & ~P, please)
  3. must be believed as true

If it’s not fully occurrent we can keep rapidly shifting our focus and our believing. If it’s not unmistakably contradictory, we can fail to understand it fully. If it doesn’t have to be believed as true, well, then what’s the deal? (I can easily keep a contradiction in some buffer!)

This works only if you must believe B exclusively as true. Note that whether we can believe B as non-exclusively true (or as false) is separate from whether B can be non-exclusively true. That is, we can have the following possibilities:

  1. (Some variant of the law of excluded middle) B, if true, cannot be false, and vice versa
  2. B can be simultaneously true and false
  3. If we believe B, we must believe it exclusively as true (i.e., we must believe it to be not false)
  4. If we believe B, we must believe it as true (but we may also believe that it is false)
  5. Believing B is independent of our attitudes toward its truth status

Now, we clearly have loads of logics where 2 holds, so we don’t have a mathematical problem. The issue, of course, is whether these capture the behavior of believing. After all, even if 2 is the right logic for things, 3 might still be right for how we believe. For example, 5 just doesn’t seem to be true of beliefs (rather than “entertainings”). In the worst case, it pushes the problem back a little. (I.e., even if not all believings are believings as exclusively true, what happens when we try to believe something “necessarily” false as exclusively true?)
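As one illustration of option 2, here’s a tiny sketch of the truth tables of Priest’s Logic of Paradox (LP), a well-known logic in which a proposition can be both true and false. The encoding is mine: values are ordered F < B < T, and T (“true only”) and B (“both”) are designated, i.e., count toward truth.

```python
# LP truth values, ordered F < B < T.
F, B, T = 0, 1, 2
DESIGNATED = {T, B}  # values that count as (at least) true

def neg(x):
    """Negation swaps T and F; B ("both") is a fixed point."""
    return {T: F, B: B, F: T}[x]

def conj(x, y):
    """Conjunction is the minimum in the order F < B < T."""
    return min(x, y)

# The contradiction p & ~p, for each value of p:
contradiction = {p: conj(p, neg(p)) for p in (F, B, T)}
# With p = T or p = F the contradiction comes out F, but with p = B it
# comes out B, which is designated: a contradiction can "hold" in LP.
```

So in LP the classical collapse (anything follows from a contradiction) is blocked, which is why such logics get invoked in discussions of believing contradictions.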

(Note that there’s a symmetry with disbelieving tautologies.)

We can think that we shouldn’t, by and large, believe falsehoods (though many falsehoods are valuable to believe because they keep us sane, or on the straight and narrow, or are generally useful in our cognitive and affective systems) while still acknowledging that we are able to believe them. The knowingly challenge suggests that the only way we can believe false things is by not (fully?) knowing that they are false. (We can handle the case of massless ropes by treating them as useful fictions; we can “believe” false things as “true in a fictional context” or the like.)

Perhaps the heart of the problem of occurrently believing as true a known-as-such contradiction is that there is no room to articulate the truth.

Consider the proposition that my left foot is colored neon pink. This is not true at the moment, but it’s easy for me to construct a situation in which it is true (i.e., it has lots of models). The proposition that my left foot is both colored neon pink and not colored neon pink has no models. If I try to construct one (even after abandoning visualisation), I fail, because it has no models. We can conceive of ordinary believing-as-true of falsehoods as framing them in one of their models. We can then always relativize our belief as a conditional one: if we’re in situation S, then B is true. If S is a model of B, this conditional is itself true, and our knowingly believing a falsehood reduces to a believing-as-true of a conditional.

This move is not available to us with a contradiction. There are no backing conditionals available to us, so there is no way to capture what truth we are believing.

However, perhaps there’s a way out. As I wrote earlier, there are logics wherein propositions can be both true and false, which suggests that there are models somewhat different from classical first-order models. More importantly, if a false proposition can also be true, that suggests that there is a (perhaps non-conditional) backing truth we can appeal to to substantiate our belief as true.

Is it possible to (knowingly, truly) believe a contradiction? (Part 1)

Let’s consider three variants of a question:

  1. Is it possible to believe a contradiction?
  2. Is it possible to knowingly believe a contradiction?
  3. Is it possible to truly believe a contradiction?

Each question arises from a certain way of saying “yes” to the prior question. I’ll work through the dialectic.

1. Is it possible to believe a contradiction?

A contradiction is a proposition or set of propositions which is “necessarily” false. It’s important to be very careful about the formulation of “contradiction”, because the standard family of notions (inconsistent, necessarily false, cannot be true, has no interpretation which makes it true, etc.) are all equivalent in a certain setting and yet can be prised apart in other settings. So, for example, in a bivalent logic, being false entails being not-true and vice versa. This isn’t true for non-bivalent logics or logics with value overloading.

I’m going to stick with the “must be false” version because, if there’s any consensus about contradictions, it’s that they are false. (You might think some are true as well, but they would still also be false. It would be interesting to think about contradictions, even true contradictions, that weren’t false. But it’s also weird.)

(Assumption 1: Contradictions are false.)

All contradictions are false but, of course, not all false propositions are contradictions. We definitely can believe all sorts of things which can be or are false. If anything is true, it’s true that everyone has false beliefs. So it’s possible to believe false propositions. Contradictions are false propositions, so what’s the big deal in believing them?

(Assumption 2: We can believe false propositions.)

The usual move here is to try to connect believing with believing-as-true. Most false statements we believe have the possibility of being true. Thus, to believe something false is to be mistaken, i.e., to assign the wrong truth value to the proposition. Herein lies the problem with believing a contradiction. If we recognise that it’s a contradiction, then we must recognise that it cannot be true. Thus, presumably, that prevents us from believing it’s true (since we know it’s not!). Thus we can’t believe it.

(Assumption 3: We cannot believe-as-true what we know to be false.)

At this point, we have a problem in that it seems that people do believe contradictions (or at least have contradictory beliefs). However, we have an out. In the case of believing false contingent statements, the solution was our ignorance of the correct truth assignment. In the case of contradictions, perhaps we don’t know it’s a contradiction. After all, some contradictions are really hard to recognise (if Frege could miss Russell’s paradox, well, don’t feel too bad about falling into contradiction). This sometimes gets cashed out (cf. Rescher and Brandom; non-adjunctive logics generally; lots of other approaches) as: we can believe inconsistent sets of propositions, but we cannot believe self-contradictions (or, perhaps, blatant self-contradictions). That is, we can have a big sea of beliefs in which we have both P and ~P, but we don’t recognize that they are both in there (it’s a big sea), at least not at the same time. “Of course”, the rational thing to do when the contraries come into simultaneous view is to give one up.

(This might not be easy if they follow in complicated ways from other parts of our beliefs.)

This “yes” answer roughly says: We can believe a contradiction if we do not know we believe it.

Even this needs some care. Given how crap we all are at thinking, we can be pretty sure we have inconsistent beliefs. So, we might well know that we believe a contradiction! But knowing that we believe some contradiction doesn’t mean we know which contradiction we believe. So maybe that’s ok!

This now yields the second question: Can we knowingly believe a contradiction?