VEILED DISAGREEMENT*

How I should weigh my disagreement with you depends at
least in part on how reliable I take you to be. But comparisons of reliability are tricky: people seem reliable and
turn out not to be; they prove reliable in some ways but not others.
A theory of how rationally to respond to disagreement requires a
clear account of how to measure comparative reliability. Here I show
how such an account can be had by drawing on the contractualist strategy of reaching moral and political agreement by imposing restrictions on the information available to disputants. A rational response to disagreement requires considering all the particular details of the dispute at hand, but behind a veil of ignorance that precludes awareness of one’s own position in the debate. Imposing the right sort of veil resolves several of the leading puzzles that confront existing theories of disagreement. It also sheds an interesting light on the very different ways in which disagreement gets resolved in epistemology versus political theory, raising troubling questions for both fields.
i. the generality problem

The most widely discussed thesis concerning disagreement—and the thesis that I will defend—insists that rationality requires an attitude of impartiality: giving equal consideration to one’s opponent and oneself. Equal consideration for the views of others does not always require equal credence: if I judge myself more reliable than you, then I should give more credence to my views. But impartiality requires not favoring my own views just because they are mine, or just because they seem true to me. Impartiality entails that when two seemingly equally reliable agents disagree in some domain where they are equally well informed, the rational course of action is for each agent to give equal weight to the other’s view. It is easy to see how rationality might seem to require this, but also easy to see that the consequences of such a policy would be startling, inasmuch as there seem to be many everyday circumstances in life (religion, politics, philosophy, and so on) where we are disposed to maintain our beliefs even in the face of intelligent, well-informed opposition.
* I owe thanks for their help to Adam Hosein, Alison Jaggar, Bradley Monton, Michael Tooley, the editors of this journal, and to an audience at the CU-Boulder Center for Values and Social Policy.
© 2014 The Journal of Philosophy, Inc.
0022-362X/14/0000/001–023

Let us refer to this equal-consideration doctrine as the thesis of Impartiality. Since the prima facie plausibility of Impartiality as a principle of epistemic rationality is obvious, and has been argued for in detail by others, I will take it for granted here as my starting point.1 The thesis is, however, highly controversial, despite its intuitive plausibility, because it is not clear that the consequences of Impartiality are ones we can live with. Part of what I seek to show, then, is why some of the worst apparent consequences of the thesis do not in fact arise.
The most discussed consequence of Impartiality is that in the special case of peer disagreement we should give equal weight to the views of our epistemic peers, and accordingly suspend our beliefs. Such an occurrence depends crucially on various details of how the situation is set out: one’s opponents must be equinumerous with oneself and one’s allies, the two sides must have equal and opposite confidence regarding the proposition in question, all parties must share the same information, and be equally reliable. The point is simply that Impartiality makes it rational to suspend belief in the face of disagreement only if the circumstances of the case are set out quite carefully and idealistically. It may accordingly be questioned whether Impartiality will actually make much difference in the real world— whether the allegedly startling consequences will ever actually obtain.
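Read numerically, the equal-weight prescription in the idealized two-party case amounts to simple credence averaging (an illustrative rendering in my own notation, not a rule the text itself states):

```latex
% Equal weight between two peers A and B with respect to proposition p:
% the revised credence is the average of the prior credences.
c_{\mathrm{new}}(p) \;=\; \tfrac{1}{2}\bigl(c_A(p) + c_B(p)\bigr)
% With equal and opposite confidence, say c_A(p) = 0.8 and c_B(p) = 0.2,
% each party moves to c_new(p) = 0.5: suspension of belief.
```

The startling consequence described above follows immediately: whenever the idealized conditions hold, the averaged credence sits at 0.5 no matter how confident either party began.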
Of the various idealizations that Impartiality requires, most problematic is the demand of equal reliability: that the disagreement concern peers. Here the complaint has often been expressed that it is not even remotely realistic to suppose that two agents will be equally reliable cognitive agents, let alone that we could ever have any good basis for supposing someone else to be our peer in this way.2 This line of objection can be quickly blocked, however, by expressing the theory in terms not of known reliability but of expected reliability.3 On this construal, you calculate your own expected reliability as a function of how likely you take it to be that you are reliable to this degree or that degree, and you calculate the reliability of your opponent in the same way. There is no need to suppose yourself and others to be exactly equally reliable; there is no need to know (or even have good reason for believing) just how reliable you and others are. It is enough to generate the problem of peer disagreement for you to be in a situation where you have no good reason to think yourself any more or less reliable than your opponent. Such situations seem common enough.

My concern will be with a deeper worry about equal reliability, a worry over scope. An agent’s cognitive reliability will vary over differences in subject matter, information, and conditions. We are all better or worse at getting at the truth in some domains than others, and so estimates of our reliability ought to vary widely, depending on how broadly or narrowly one looks. You and I may well have the same expected reliability writ large but very different expected reliabilities in particular domains. I may have better vision; you may have a better sense of smell. I may be better at geometry; you may be better at algebra. I may be sharper in the mornings; you may be sharper in the evenings. And on and on. This is what we might call the Generality Problem for peer disagreement.

Reliabilist theories of knowledge are famously prone to such trouble.4 A mathematician can perfectly well have a reliable belief-forming mechanism in her domain of expertise, and so have mathematical knowledge, but be hopeless when it comes to politics. Her overall reliability across both domains is not the relevant measure—that would be too general a measurement. But it is equally problematic to focus too narrowly on the specific case in question.

Suppose Tommy is for the most part hopeless at mathematics but that he happens to get Problem #9 right—but only because it is a word problem, and because Tommy always chooses answer c when given word problems, and because as it happens the answer to #9 is c. In cases of this exact kind Tommy is extremely reliable, because he reliably chooses c in such cases and because c is indeed the answer in all cases of this exact kind. Obviously, we are individuating cases too narrowly for a reliabilist theory of knowledge to be workable. No one would suppose that Tommy knows the answer to #9. But it is surprisingly hard to see what the correct level of generality is.

1 Prominent statements of the Impartiality thesis can be found in Richard Feldman, “Epistemological Puzzles about Disagreement,” in Stephen Hetherington, ed., Epistemology Futures (New York: Oxford, 2006), pp. 216–36; and in Adam Elga, “Reflection and Disagreement,” Noûs, xli, 3 (September 2007): 478–502. Elga there coins the often-used but potentially misleading label ‘equal-weight view’. Compare David Christensen, “Disagreement as Evidence: The Epistemology of Controversy,” Philosophy Compass, iv, 5 (September 2009): 756–67, who prefers to speak of “conciliatory” views. The term ‘conformism’ is favored by Jennifer Lackey, “A Justificationist View of Disagreement’s Epistemic Significance,” in Adrian Haddock, Alan Millar, and Duncan Pritchard, eds., Social Epistemology (New York: Oxford, 2010), pp. 298–325. But she herself does not wholly endorse the view, for reasons considered below. The term ‘Impartiality’ is my own.

2 For worries of this sort see Richard Feldman, “Evidentialism, Higher-Order Evidence, and Disagreement,” Episteme, vi, 3 (October 2009): 294–312, at pp. 300–01; Bryan Frances, “The Reflective Epistemic Renegade,” Philosophy and Phenomenological Research, lxxxi, 2 (September 2010): 419–63; and, in detail, Nathan L. King, “Disagreement: What’s the Problem? or A Good Peer is Hard to Find,” Philosophy and Phenomenological Research, lxxxv, 2 (September 2012): 249–72. Here I limit the concept of epistemic peer to cognitive reliability. More broadly, one might also include possessing the same information. Sameness of information, or sameness of evidence, is itself a problematic feature of the Impartiality thesis, but for purposes of this paper I am largely setting it aside.

3 Here I follow Roger White, “On Treating Oneself and Others as Thermometers,” Episteme, vi, 3 (October 2009): 233–50, at pp. 235–36.

4 See Earl Conee and Richard Feldman, “The Generality Problem for Reliabilism,” Philosophical Studies, lxxxix, 1 (January 1998): 1–29.
It is similarly unclear, at first glance, how to measure reliability in cases of peer disagreement. Descriptions of what it is to be an epistemic peer tend to restrict the scope of reliability to a particular domain but have paid little systematic attention to just what level of generality is called for. Obviously, some restriction is appropriate. If the disagreement concerns Brazilian politics, what matters is whether you and I are epistemic peers in that domain, not whether I am more reliable in the domain of baseball. But just how narrow should we go? Is the relevant measure of peerhood our reliabilities with regard to politics in general, or South American politics, or Brazilian politics? I hope to show that these questions admit of a fairly precise answer. Whereas a reliabilist theory of knowledge is under pressure to be neither too general nor too specific, assessments of disagreement should measure reliability as finely as possible, taking into account all of the relevant factors, and tailoring the assessment of reliability narrowly to the particular matter in dispute. If we disagree about whether it will snow here tomorrow, then we might begin by considering our respective overall reliabilities regarding weather forecasting. But if we discover that one of us is more reliable about next-day forecasts, or more reliable about snow, or more reliable about the weather here, then it is this information that matters. In general, what we want is the most narrowly tailored information available about one’s expertise regarding the particular case in question.
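The narrowing just described can be put schematically in terms of conditional reliability (the notation here is my own gloss, not the author’s):

```latex
% R(X \mid C): X's expected reliability over questions in reference class C.
% Candidate classes for the snow dispute, from general to specific:
%   C_1 = weather forecasts
%   C_2 = next-day forecasts
%   C_3 = next-day snow forecasts for this location
% The disputants should compare
R(A \mid C_k) \quad \text{versus} \quad R(B \mid C_k)
% for the most specific class C_k about which they actually possess
% track-record information.
```

On this rendering, the rule of the paragraph above is simply: take the largest k for which the comparison can still be made on available evidence.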
The significance of this conclusion—as well as its correctness—will emerge in the discussion to follow. But, as will also become clear, measuring reliability in the context of disagreement is not quite as straightforward as these initial remarks suggest, and it seems only fair to acknowledge some of the difficulties right away. One complication is that there will tend to be limits on how specifically we can gauge an agent’s reliability. If you and I disagree about the name of the current president of Brazil, then we would ideally like to know which of us is more reliable when it comes to this very topic, names of Brazilian presidents. But we may have to settle for a comparison of our more general reliabilities about world politics.
How do we know exactly what we have to settle for? One key here is to remember that the relevant issue is our expected reliability.
Since our overarching concern is to find the rational response to disagreement given the rest of what an agent believes, the relevant question for me to ask myself about reliability is this: what degree of reliability do I have reason to take myself to have here, and what degree do I have reason to take you to have? Although we would ideally like the most fine-grained assessment possible, I am likely to have information only of a more coarse-grained kind, based perhaps on memories of conversations with you about assorted matters of world politics. Such contingencies of available information complicate matters but also permit an approach to the generality problem that is not available to reliabilist theories of knowledge.
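The earlier gloss on expected reliability (how likely I take it to be that I am reliable to this or that degree) is naturally rendered as a probability-weighted average; the notation below is mine, offered as a sketch:

```latex
% Let r_1, ..., r_n be candidate reliability levels (e.g. 0.6, 0.7, 0.9),
% and let P_A(r_i) be the probability that agent A assigns to being
% reliable to degree r_i in the relevant domain. A's expected reliability:
\mathbb{E}[R_A] \;=\; \sum_{i=1}^{n} P_A(r_i)\, r_i
% Peerhood in this sense requires only that E[R_A] = E[R_B]; it does not
% require that the agents' actual reliabilities coincide, or be known.
```

This is why the objection from unrealistic idealization loses its force: two agents can share an expected reliability while their true reliabilities differ, and the expectation is computable from what each agent already believes.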
Because the reliabilist cares only about the external fact of reliability, there is no downward constraint on specificity, until we arrive at the limiting, uninformative case of reliability in this single instance. In contrast, because peer disagreement focuses on expected reliability, there will be downward constraints on specificity in any real-world case.5

A second complication concerns an essential qualification to the rule that we should consider reliability in the most fine-grained way available. If you and I disagree about the name of the Brazilian president, then the most fine-grained way for me to evaluate your reliability is in terms of how likely I take you to be right about this particular question. There, however, I assess your expected reliability as low, inasmuch as I think your answer is wrong. Obviously, though, that is too specific an assessment for present purposes, since it would blatantly beg the very question at issue—how much weight to give the contrary opinions of one’s peers. The solution must be somehow to measure reliability while setting aside that which is disputed. But it is not obvious how to maintain a sufficiently fine-grained approach when one leaves out of account everything the two sides disagree on. Here lies the heart of my concerns, and here is where it will be useful, in section iv, to move behind the veil of ignorance.