VEILED DISAGREEMENT*

How I should weigh my disagreement with you depends at least in part on how reliable I take you to be. ...
If, however, we somehow could form reasonable expectations about an agent’s reliability in some hyper-specific scenario, then it would be rational to follow those expectations in adjusting our own beliefs. This shows that the difference with reliabilism is not just a matter of being able to resist being pulled down to the limiting case of a single instance, but more fundamentally a matter of the clash between externalist and internalist approaches. It is, so far as I have found, universally accepted in the peer-disagreement literature that the appropriate approach is internalistic—how we ought to adjust our beliefs in the face of disagreement is a function of what information we have access to. I relegate these remarks to a footnote because they raise issues I mean to take for granted in what follows.
ii. neglected evidence

Solving the Generality Problem for peer disagreement allows us to answer two of the most prominent objections to the Impartiality thesis.
I consider first the objection from neglected evidence and then, in the following section, the objection from absurd disagreements.
The neglected-evidence objection begins by observing that, if we both possess the same information and yet disagree, then Impartiality demands that our disagreement get decided by weighing my credence against yours, and my reliability against yours. If these are equal and opposite, then we each should arrive at a credence of 0.5. What looks objectionable is that we would seemingly have arrived at this result without considering the evidence on which we base our respective opinions.6 This way of putting the objection is, however, not entirely apt. After all, if my evidence that p really were to drop out of the picture, then I would lose any reason to believe p, and all I would have to go on would be your belief that ∼p. Since I take you to be just as trustworthy as me, it would then be rational for me to follow you and believe ∼p. We would have a straightforward case of testimony, not disagreement. The reason I go to 0.5, rather than embrace your view, is that I am still paying attention to my evidence.

There is, however, a better way to formulate the neglected-evidence objection. For although Impartiality does not ignore the evidence, it does seem to ignore the evidence’s strength. If I determine that you and I are locked in genuine peer disagreement, then Impartiality takes me right to 0.5, regardless of how strong or weak my evidence looks to be. Equal credences and equal reliability between peers automatically yield stalemate, no matter what the evidence. That cannot be right.

6 The objection has been set out most forcefully in Thomas Kelly, “Peer Disagreement and Higher-Order Evidence,” in Richard Feldman and Ted A. Warfield, eds., Disagreement (New York: Oxford, 2010), pp. 111–74. As Kelly puts it, “With respect to playing a role in what is reasonable for us to believe at time t1, E [the evidence] gets completely swamped by purely psychological facts about what you and I believe” (p. 124). See too David Enoch, “Not Just a Truthometer: Taking Oneself Seriously (but not Too Seriously) in Cases of Peer Disagreement,” Mind, cxix, 476 (October 2010): 953–97, at p. 969: “the Equal Weight View requires that in the face of peer disagreement we ignore our first-stage evidence altogether.”
Roger White has developed this line of objection in some detail, and it will be worthwhile to work through his account. On White’s analysis, the conditional probability I should consider in a case where you and I are locked in peer disagreement over p is not the probability of p given your contrary belief, but the probability of p given both your contrary belief and the evidence:

P(p ∣ e & you believe ∼p).    (1)
This seems to take into account both what it is I want to figure out (the probability of p) and what information I have to go on (the evidence and the fact of your contrary belief). Plugging this formula into Bayes’ Theorem reveals that we get the desired equal-weight outcome (that is, that the conditional probability here is ½) if and only if we assume that the probability of p given the evidence is exactly the same as your expected reliability:

r_y = P(p ∣ e).    (2)

This is not an obvious result,7 but one can see intuitively why it makes sense as follows. First, treat an agent’s expected reliability regarding p as the conditional probability of p given that the agent believes p. Now, in the present case we are supposing that you and I are locked in peer disagreement: that is, we are in a situation where your expected reliability is equivalent to my expected reliability. Drawing these threads together with (2), we get

r_y = P(p ∣ e) = r_m = P(p ∣ I believe p).    (3)

Proponents of Impartiality, if they are to get the desired equal-weight outcome in cases of peer disagreement, must embrace all the equivalencies in (3). The price of not neglecting the evidence—of including e among the information on which I conditionalize in (1)—is that P(p ∣ e) gets drawn into (3), as equivalent both to my expected reliability and to your expected reliability. When and only when we do that can we get a value of ½ for (1).

7 For the details of how Bayes’ Theorem yields this outcome, see White, “On Treating Oneself,” op. cit., p. 239.
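Footnote 7 defers the Bayesian details to White, but a compact sketch can be given here. It rests on two modeling assumptions that the text leaves implicit: your reliability is symmetric, so that P(you believe ∼p ∣ ∼p & e) = r_y and P(you believe ∼p ∣ p & e) = 1 − r_y, and your belief is independent of e once the truth value of p is fixed. Writing x for P(p ∣ e):

```latex
% Sketch of the derivation behind (1) and (2), under the two
% assumptions stated above (symmetric reliability; conditional
% independence of your belief from e given the truth value of p).
\[
  P(p \mid e \mathbin{\&} \text{you believe } {\sim}p)
    = \frac{(1 - r_y)\,x}{(1 - r_y)\,x + r_y\,(1 - x)},
  \qquad x = P(p \mid e).
\]
% The left-hand side equals 1/2 exactly when the two products in the
% denominator balance:
\[
  (1 - r_y)\,x = r_y\,(1 - x)
  \quad\Longleftrightarrow\quad
  x = r_y,
\]
% which is (2): the equal-weight outcome holds iff r_y = P(p | e).
```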
One striking implication of this result is that the probability of p given the evidence is equivalent to the probability of p given that I believe p. On its face, this seems like no bad thing at all—it looks like my beliefs are simply tracking my evidence. White agrees that, so far, we have arrived at nothing more troubling for Impartiality than what “can seem like a bit of common sense.”8 But White thinks that further reflection on the situation reveals serious problems. If we are committed to the above probabilistic connection between belief and evidence, then this, according to White, imposes a general constraint on our credences, even prior to disagreement.9 He calls this the Calibration Rule:

If I draw the conclusion that p on the basis of any evidence e, my credence in p should equal my prior expected reliability with respect to p.10

8 Ibid.

9 I take for granted the usual direct relationship between probabilities and credences. For the sake of clarity and vividness, in some places I treat belief as admitting of degrees expressed in terms of credences, and in other places I treat belief as all or nothing. At the cost of some awkwardness, one could formulate my conclusions consistently in one way or the other.

10 White, “On Treating Oneself,” op. cit., p. 239.

The proponent of Impartiality is stuck with this result, inasmuch as it is embedded in (3). This is the fruit of our attempting to take account of the evidence, back in (1), but it is here, for White, that the neglected-evidence objection finally becomes vivid. The problem, as remarked earlier, is not that this Calibration Rule calls on us to ignore the evidence, but that we seem unable to respond to differences in the kind of evidence available. White considers a situation where I have very strong evidence for p but my expected reliability for p is only 70%. The Calibration Rule seems to require that I downplay the significance of the evidence and maintain a credence in p of 0.7. The result, as White puts it, is that “the strength of the evidence—in this case, the fact that it strongly supports p—has no role to play in determining my attitude.”11 Here White is surely right: this must be wrong.

11 Ibid., p. 240.
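White’s 70% case can be made concrete with a small simulation (a minimal sketch; the whole setup is hypothetical, and only the 0.7 reliability figure comes from White’s example). If my coarse-grained measure says I am 70% reliable on questions like p, then among the cases where I conclude that p, p comes out true about 70% of the time, so 0.7 is the only credence calibrated to that measure, however strong the evidence in any one case feels.

```python
import random

# Minimal sketch (hypothetical setup): an agent whose expected
# reliability on questions of this coarse-grained type is 0.7.
RELIABILITY = 0.7
TRIALS = 100_000

random.seed(0)

# In each trial the agent draws a conclusion; it is correct with
# probability equal to her reliability.
correct = sum(random.random() < RELIABILITY for _ in range(TRIALS))

# Frequency with which p is true, given that she concluded p: ~0.7.
# The Calibration Rule says her credence should match this number,
# which is White's complaint: the felt strength of the evidence in
# any particular case plays no role.
print(f"truth frequency given belief: {correct / TRIALS:.3f}")
```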
The situation, then, is that Impartiality requires a policy of proportioning one’s credence to one’s expected reliability. But sometimes it seems clear that the evidence warrants greater confidence than one’s expected reliability predicts. To make this vivid, consider disagreement over an arithmetic test. You and I discuss the answers afterwards, and we find that our answers agree except for Problem #9. I regard you as my epistemic peer, so whereas I had been thinking that I aced the test, now I fear that there is a good chance I got a question wrong.

But now suppose I still have the question sheet, and I look at #9. It’s a word problem, and I pride myself on being really good at word problems. Moreover, when I look at this particular problem, the answer seems clear to me. So now it looks like I should be feeling good about my chances. No doubt I should not entirely ignore our disagreement, but surely I should not drop my credence regarding my answer to #9 down to ½, as Impartiality would seem to require. What I should do instead is to let my evidence increase my credence in my answer above my expected reliability, which will cause me to give less than equal weight to your answer. This violates the Calibration Rule.
White is quite right that Impartiality demands adhering to the Calibration Rule. But the correct moral to draw here is not that Impartiality is in trouble, but that we need to distinguish between more and less fine-grained measures of an agent’s reliability—that we need to grapple with the Generality Problem. If I am brilliant at word problems and terrible at graphing, then my overall expected arithmetical reliability may be 90%, and that may be a useful number to know, but it may also be useful to have a more fine-grained measure of expected reliability, which shows, for instance, that I am 60% reliable on graphing problems and 98% reliable on word problems.
The Calibration Rule is defensible provided we consider sufficiently fine-grained measures of reliability. Before taking the test, it is reasonable to have a credence of 0.9 in my answer to the first problem.
Once I see that it is a graphing problem, however, my heart ought to sink, and I should revise downward. How far? To 0.6, of course, no more and no less. But what about once I work through the problem?
Since I am bad at graphing, this may make me even more dispirited, lowering my credence in p still further. But if this is how I always feel when I do graphing problems, then it would be irrational to become less confident—I should stay at 0.6. Conversely, suppose that working through the problem gives me a sense of confidence in my answer. If this is how I always feel, then that confidence too should count for nothing. I should stay at 0.6. Of course, we might need to develop even more fine-grained measures of reliability to distinguish between the various degrees of confidence I might feel in light of the evidence. If one’s measure of reliability is not nuanced enough to distinguish between those cases, then of course looking at the evidence will make a difference to one’s credences. Without localized information, I will often violate the Calibration Rule, adjusting my credences upward and downward, from case to case, but in a way that will cohere in the long run with my antecedently predicted global reliability. If I manage to arrive at sufficiently localized information about my expected reliability, then I should conform my credences to the Calibration Rule.
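The sequence of revisions just described (0.9 before seeing the problem, 0.6 on seeing that it is a graphing problem, 0.98 had it been a word problem) amounts to reading one’s credence off the most fine-grained reliability measure one possesses. A sketch, with hypothetical names and the reliability figures from the running example:

```python
# Sketch of the Calibration Rule applied at increasingly fine grains
# (names hypothetical; reliability figures from the running example).
RELIABILITY = {
    "arithmetic": 0.90,  # overall expected arithmetical reliability
    "graphing": 0.60,    # fine-grained: graphing problems
    "word": 0.98,        # fine-grained: word problems
}

def calibrated_credence(problem_type: str = "arithmetic") -> float:
    """Credence licensed by the most fine-grained measure available."""
    return RELIABILITY[problem_type]

print(calibrated_credence())            # 0.9: before seeing the problem
print(calibrated_credence("graphing"))  # 0.6: it turns out to be graphing
print(calibrated_credence("word"))      # 0.98: it turns out to be a word problem
```

Post-problem feelings of confidence or gloom change nothing unless the reliability measure is fine-grained enough to track them, which is the point of the paragraph above.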
What does this show us about peer disagreement, and the neglected-evidence objection? Return to our disagreement over Problem #9.
If this is a case of peer disagreement, then ex hypothesi our expected arithmetical reliability is the same. But suppose I now look at my question sheet and see that #9 is a word problem. My credence in my answer goes up, and I have violated the Calibration Rule relative to that initial, rough-grained measure of my reliability. But then I remember our disagreement. The question I need to ask, of course, is how good you are at word problems—I need to make a
more fine-grained assessment of your expected reliability. If that assessment leads me to expect that we are equally reliable at word problems, then I should return to giving our answers equal weight. It is as if the evidence has dropped back out of the story. If, instead, I conclude that I am better at word problems than you are, then I should not give your view equal weight, but then we would no longer be locked in peer disagreement. In this localized context, I would have concluded that you are not my peer. Either way, the evidence does not really drop out but instead gets assimilated into other measures.
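Both outcomes can be computed with the same Bayesian machinery used for (1) above (a sketch under the same symmetry and independence assumptions; the function name, and the 0.90 word-problem reliability assigned to you in the second case, are hypothetical):

```python
# Sketch: my credence in p after learning of your contrary belief,
# computed as P(p | e & you believe ~p) via Bayes' Theorem, under the
# same assumptions used in the derivation of (1)-(3) above.

def credence_after_disagreement(p_given_e: float, your_reliability: float) -> float:
    numerator = (1 - your_reliability) * p_given_e
    denominator = numerator + your_reliability * (1 - p_given_e)
    return numerator / denominator

# Fine-grained peers: my calibrated credence from the evidence (0.98 on
# word problems) equals your word-problem reliability, so equal weight:
print(credence_after_disagreement(0.98, 0.98))  # 0.5

# Not peers at this grain: if you are only 0.90 on word problems (a
# hypothetical figure), your dissent moves me, but not to a stalemate:
print(credence_after_disagreement(0.98, 0.90))  # ~0.84
```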
If we stipulate that you and I are equally reliable agents who share all the same information, and we stipulate that we are assessing reliability in a sufficiently fine-grained way, then disagreement between us rationally requires giving our views equal weight. The character of the evidence plays a role twice here, both in explaining our disagreement and in calibrating our fine-grained reliabilities.
This solution to the neglected-evidence objection shows why the Generality Problem for peer disagreement must be solved by using fine-grained estimates of expected reliability. The solution in fact sets a minimal condition on fine-grainedness: expected reliability must be measured finely enough to allow us to adhere to the Calibration Rule. My judgment that the two of us are peers, in terms of our expected reliability, should lead to a credence of 0.5 in cases of disagreement over p if and only if my reflecting on the evidence does not lead me to a credence in p that diverges from my expected reliability. Where it does, I need to reassess whether we are peers by attempting a more localized estimate of reliability, both mine and yours. Only once I have reached a sufficiently fine-grained assessment of whether you and I are peers in this particular kind of situation, given this particular sort of evidence, do I have good reason to give your view equal weight.
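The condition just stated suggests a simple procedure, sketched below with wholly hypothetical names (an illustration of the minimal condition, not a definitive decision rule): refine the reliability measure until the Calibration Rule is satisfied, that is, until my evidence-based credence no longer diverges from my expected reliability, and only then test whether you and I are peers at that grain.

```python
# Sketch (hypothetical interface) of the minimal fine-grainedness
# condition: equal weight is warranted only if, at a grain fine enough
# for me to satisfy the Calibration Rule, we are still equally reliable.

def equal_weight_warranted(credence_from_evidence, my_reliability,
                           your_reliability, grains, tolerance=0.01):
    for grain in grains:  # ordered from coarse to fine
        calibrated = abs(credence_from_evidence - my_reliability[grain]) <= tolerance
        if calibrated:
            # The Calibration Rule is satisfied at this grain; peerhood
            # at the same grain decides whether to give equal weight.
            return abs(my_reliability[grain] - your_reliability[grain]) <= tolerance
    return None  # no sufficiently fine-grained measure is available

# Problem #9: my coarse reliability (0.9) diverges from my evidence-based
# credence (0.98), so I refine to word problems, where it matches; we are
# peers at that grain, so equal weight is warranted after all.
me = {"arithmetic": 0.90, "word problems": 0.98}
you = {"arithmetic": 0.90, "word problems": 0.98}
print(equal_weight_warranted(0.98, me, you, ["arithmetic", "word problems"]))  # True
```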