use faux pas (killtacular) wrote,

Against Imprecise Probabilities #2

So, advocates of imprecise probabilities say 1) that in the complete absence of evidence about some proposition, we should assign a set of probabilities covering [0,1] to that proposition, and 2) that we should update our probabilities upon learning some evidence by conditionalizing upon each member of our set of probabilities.

But if you think both of these claims are true, you get disaster. First, if you have a (convex) set of probabilities covering [0,1], then guess what, you automatically get [0,1] back as the result of conditionalizing upon any piece of evidence whatsoever. But it gets worse! Suppose you sympathize with some sort of strict coherence or regularity condition, and say that total ignorance should be modeled by the open set of probabilities (0,1). Again, though, any update by conditionalization will result in (0,1). This is because for any 0&lt;x&lt;1, P(H|E)=x for some suitable prior P(H), given some fixed P(E) and P(E|H) (if you don't fix those, it is even easier to get P(H|E)=x). Confirming evidence will move each individual probability in your set higher, but for each value of x, some other prior will come "swooping in" from the left (so to speak) and settle on x. And vice versa for disconfirming evidence. Upshot: ignorance cannot be rectified via learning on the standard model of imprecise probability (well, it can, but only in a very unmotivated and arbitrary way). But that is absurd, and disastrous epistemology.
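A quick numerical sketch makes the point concrete. Here I pick arbitrary fixed likelihoods P(E|H)=0.9 and P(E|~H)=0.2 (these numbers are my own illustration, not anything from the argument above) and sweep the prior P(H) across (0,1). Every individual posterior exceeds its prior, since the evidence confirms H, yet the set of posteriors still sweeps out essentially all of (0,1):

```python
def posterior(prior, like_h=0.9, like_not_h=0.2):
    """Bayes' theorem with fixed likelihoods P(E|H) and P(E|~H)."""
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Approximate the open set (0,1) with a fine grid of priors.
priors = [i / 1000 for i in range(1, 1000)]
posteriors = [posterior(p) for p in priors]

# Each posterior is strictly higher than its prior (pointwise learning)...
assert all(q > p for p, q in zip(priors, posteriors))

# ...but the posterior set still stretches from near 0 to near 1.
print(min(posteriors), max(posteriors))
```

So each member of the credal set "learns" from the evidence, but the set as a whole remains (approximately) the vacuous (0,1) interval, which is exactly the dilation-style worry in the paragraph above.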