use faux pas (killtacular) wrote,

Objectivity in Subjective Bayesianism

*You probably should skip this: it's very long and unlikely to interest anyone who isn't into Bayesianism, and maybe only some of those who are; I just kinda wanted to write it up*

So, I've been thinking recently about the condition of regularity (or strict coherence) in subjective Bayesianism. This is the requirement that you assign a probability strictly between 0 and 1 to every proposition that is not a logical truth or a logical falsehood (or something similar). If you do this, I think you get a nifty resolution to (at least one version of) the problem of old evidence. I've been looking for other applications.

One that seemed promising was the martingale convergence results for subjective Bayesianism. Basically, they state that as long as two different Bayesian agents agree on which propositions they assign probability 0 (and so also 1), then, when presented with the same evidence, in the long run they will achieve a merger of opinion about a substantial class of propositions - that is, you get agreement (in fact, the results are considerably stronger: both will converge to certainty on the true hypothesis). Since intersubjective agreement is the foundation of objectivity in the (epistemology of the) sciences, this seems to guarantee a measure of (eventual) objectivity for subjective Bayesianism.
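To get a feel for the flavor of these merger-of-opinion results, here is a toy simulation (entirely made up for illustration - the hypotheses, coin biases, and priors are not from any of the theorems themselves): two agents start with very different regular priors over the same two hypotheses about a coin, see the same flips, and update by strict conditionalization.

```python
import random

# Two hypotheses about a coin: h1 says bias 0.7 toward heads, h2 says 0.3.
# Both agents' priors are regular: no hypothesis gets 0 or 1.
random.seed(0)
biases = {"h1": 0.7, "h2": 0.3}
priors_a = {"h1": 0.9, "h2": 0.1}
priors_b = {"h1": 0.2, "h2": 0.8}

def update(priors, heads):
    # Strict conditionalization on a single coin flip.
    likelihood = {h: (b if heads else 1 - b) for h, b in biases.items()}
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

true_bias = 0.7  # the world is as h1 says
for _ in range(200):
    flip = random.random() < true_bias
    priors_a = update(priors_a, flip)
    priors_b = update(priors_b, flip)

print(priors_a["h1"], priors_b["h1"])
# Both credences in h1 approach 1, and the two agents end up agreeing.
```

Despite starting 0.7 apart, after a couple hundred flips both agents are all but certain of the true hypothesis - merger of opinion plus convergence to certainty, just as the theorems promise in the long run.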

Now, if you impose regularity, you automatically get the required condition for this result! So that is awesome! A principled reason why different agents' 0s and 1s should agree. Or so I thought. Unfortunately, the convergence theorems apply only to agents who update via strict conditionalization. Since that is a process of becoming certain of (i.e., assigning probability 1 to) various propositions, appealing to regularity to guarantee the initial agreement of 0s and 1s is, at the least, pretty counterproductive.
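The tension is easy to see concretely. A quick sketch (the four-world space and the numbers are just an illustration, not anything from the theorems): start with a regular prior, strictly conditionalize once, and regularity is immediately gone.

```python
# A tiny four-world space: each world is an (h, e) pair, and a proposition
# is a set of worlds. The prior is regular: every world gets positive weight.
worlds = [("h", "e"), ("h", "not-e"), ("not-h", "e"), ("not-h", "not-e")]
prior = {w: 0.25 for w in worlds}

def conditionalize(p, evidence):
    # Strict conditionalization: zero out worlds incompatible with the
    # evidence, then renormalize what remains.
    total = sum(pr for w, pr in p.items() if w[1] == evidence)
    return {w: (pr / total if w[1] == evidence else 0.0) for w, pr in p.items()}

posterior = conditionalize(prior, "e")
prob_e = sum(pr for w, pr in posterior.items() if w[1] == "e")
print(prob_e)                      # 1.0 -- the evidence is now certain
print(posterior[("h", "not-e")])   # 0.0 -- a contingent world at probability 0
```

One update and the agent assigns 1 to a contingent proposition and 0 to others - exactly what regularity forbids. So regularity can't underwrite the initial agreement while the agents update this way.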

So then I thought: maybe we could walk down from the canonical convergence theorems (which, after all, guarantee convergence to certainty, not merely merger of opinion) to a mere merger-of-opinion result for Jeffrey conditionalization (JC), with the same stipulation of initial agreement on 0s and 1s. Unfortunately, two problems presented themselves. The first is that the canonical presentation of the proofs of the convergence theorems (Gaifman and Snir) is, well, pretty hard, so it was difficult to figure out how (if at all) they could be generalized. But more importantly, I think such a result may well be impossible.

The reason is that, to get any sort of merger-of-opinion result, I think you would have to impose a condition along the lines of: "after receiving some evidence, both agents assign it a probability of (at least?) x," and the resulting merger-of-opinion result would guarantee long-run convergence as a function of x (i.e., the agents would end up within f(x) of each other). The problem is that no such condition can be imposed for JC, on pain of granting the non-commutativity of JC. JC is formally non-commutative in the sense that the order in which evidence is presented to you (and so updated on via JC) matters: if you assign probability x to e1 and y to e2, for two propositions relevant to some hypothesis h, then updating first on e1 and then on e2 generally leaves you with a different final probability for h than updating first on e2 and then on e1. This problem can be overcome, but only by allowing that the same experience can result in different probability assignments for the same evidence propositions. That is quite justified: your background beliefs, and your credence in the hypothesis in question, should play a role in the probabilities that result from your experience. But, as a result, the same experience can never guarantee the same resulting updated probability for two different agents. So the condition that seems to be required for any merger-of-opinion result cannot obtain without sacrificing commutativity.
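The order effect is easy to exhibit numerically. The following is a made-up example (the joint distribution and the target probabilities 0.8 and 0.7 are arbitrary choices, not from any particular source): when each Jeffrey update is forced to land its evidence proposition on the *same* posterior probability regardless of when it happens, the two update orders disagree about h.

```python
from itertools import product

# A joint distribution over (h, e1, e2) with correlations between h and
# each evidence proposition. All numbers are illustrative.
worlds = list(product([True, False], repeat=3))  # (h, e1, e2)
prior = {}
for h, e1, e2 in worlds:
    p = 0.5
    p *= 0.7 if (e1 == h) else 0.3
    p *= 0.6 if (e2 == h) else 0.4
    prior[(h, e1, e2)] = p

def jeffrey(p, idx, q):
    """Jeffrey-shift the probability of coordinate idx being True to q,
    rescaling uniformly inside that proposition and inside its negation."""
    p_true = sum(pr for w, pr in p.items() if w[idx])
    p_false = 1.0 - p_true
    return {w: pr * (q / p_true if w[idx] else (1 - q) / p_false)
            for w, pr in p.items()}

def prob_h(p):
    return sum(pr for w, pr in p.items() if w[0])

# Same two experiences, opposite orders, same forced posteriors for e1, e2:
a = jeffrey(jeffrey(prior, 1, 0.8), 2, 0.7)  # e1 to 0.8 first, then e2 to 0.7
b = jeffrey(jeffrey(prior, 2, 0.7), 1, 0.8)  # e2 to 0.7 first, then e1 to 0.8
print(prob_h(a), prob_h(b))  # the two final credences in h differ
```

The gap here is small, but it is nonzero, and that is all non-commutativity needs. The standard escape is to let the experience fix a Bayes factor rather than a posterior probability for the evidence - which is precisely to concede that the same experience yields different evidence probabilities against different backgrounds.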

So, I started thinking about other ways of getting objectivity for subjective Bayesianism that don't rely on the convergence theorems. I think the answer is found in the "least change" results for JC. Basically, updating by JC results in the minimal change in your probability assignments that you can make while remaining coherent. I think this is where the true objectivity of subjective Bayesianism is to be found. The reason is that Bayesianism is a theory of procedural rationality: given your credences, it tells you how to respond to new evidence. JC represents the least change you can make in your beliefs when you change your mind about some evidence or other proposition. Any further change would represent an unwarrantedly subjective response to that evidence, for which there is, demonstrably, no reason at all. The subjectivity of subjective Bayesianism should be located entirely in the priors. Once those are determined, the only (well, more or less) way to objectively respond to learning new things is to use JC. Subjective Bayesianism (with regularity) thus finds its objectivity not in substantive merger-of-opinion results, but in the procedural "elimination of arational/unwarranted additional changes" secured by the "least change" theorems. The appropriate place for subjectivity is in assigning priors, not in responding to evidence by changing probabilities that the evidence has no bearing on. The former is the core of subjective Bayesianism. The latter is arational personalism. But I've obviously yet to really formulate this "argument" into anything convincing.
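The property doing the work in the least-change results is rigidity: JC moves the probability of the evidence while leaving all probabilities conditional on the evidence (and on its negation) exactly where they were. A toy check, with an arbitrary made-up joint over (h, e):

```python
# Rigidity of Jeffrey conditionalization: shifting P(e) leaves the
# conditional probabilities P(. | e) and P(. | not-e) untouched.
prior = {("h", "e"): 0.3, ("h", "not-e"): 0.2,
         ("not-h", "e"): 0.1, ("not-h", "not-e"): 0.4}

def jeffrey(p, q):
    # Move the probability of e to q, rescaling uniformly inside e and not-e.
    p_e = sum(pr for w, pr in p.items() if w[1] == "e")
    return {w: pr * (q / p_e if w[1] == "e" else (1 - q) / (1 - p_e))
            for w, pr in p.items()}

def cond(p, h_val, e_val):
    # P(h_val | e_val) in distribution p.
    p_e = sum(pr for w, pr in p.items() if w[1] == e_val)
    return p[(h_val, e_val)] / p_e

post = jeffrey(prior, 0.9)  # experience drives P(e) from 0.4 up to 0.9
print(cond(prior, "h", "e"), cond(post, "h", "e"))          # unchanged: 0.75
print(cond(prior, "h", "not-e"), cond(post, "h", "not-e"))  # also unchanged
```

Only the weight of e moves; everything downstream of e changes exactly as much as coherence demands and no more. That "no more" is the procedural objectivity I'm gesturing at.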