Accuracy-first epistemology

A veritist’s reply to the Swamping Problem

Veritism says that the fundamental source of epistemic value for a doxastic state is the extent to which it represents the world correctly—that is, its fundamental epistemic value is determined entirely by its truth or falsity. The Swamping Problem says that Veritism is incompatible with two pre-theoretic beliefs about epistemic value (Zagzebski 2003, Kvanvig 2003):

  1. a true justified belief is more (epistemically) valuable than a true unjustified belief;
  2. a false justified belief is more (epistemically) valuable than a false unjustified belief.

In this paper, I consider the Swamping Problem from the vantage point of decision theory. I note that the central premise in the argument is what Stefansson and Bradley (2015) call Chance Neutrality in Richard Jeffrey’s decision-theoretic framework. And I describe their argument that it should be rejected. Using this insight, I respond to the Swamping Problem on behalf of the veritist.
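
As a rough gloss on that premise (my paraphrase; Stefansson and Bradley's official formulation lives in Jeffrey's framework and differs in detail): where V is the desirability function and Ch(X) = x says that the objective chance of X is x, Chance Neutrality requires that

    V(X & Ch(X) = x) = V(X & Ch(X) = y) for all chances x, y.

That is, once an outcome obtains, the chance it had of obtaining contributes nothing further to its desirability. The Swamping Problem's key premise is the epistemic analogue: once a belief's truth or falsity is fixed, its justification contributes nothing further to its epistemic value. Rejecting Chance Neutrality, as Stefansson and Bradley recommend, makes room for the veritist to reject the analogue as well.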

PDF

What is justified credence?

In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn’s is a version of process reliabilism (Dunn, 2015), while Weng Hong Tang offers a version of indicator reliabilism (Tang, 2016). As we will see, both face the same objection: if they are right about what justification is, it is mysterious why we care about justification, for neither account explains how justification is connected to anything of epistemic value. We will call this the Connection Problem. I begin by describing Dunn’s process reliabilism and Tang’s indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn’s reliabilism and Tang’s. Thus, I reach the top of the same mountain as well.

PDF

An accuracy-dominance argument for conditionalization (with R. A. Briggs)

Noûs

Epistemic decision theorists aim to justify Bayesian norms by arguing that these norms further the goal of epistemic accuracy—having beliefs that are as close as possible to the truth. The standard defense of probabilism appeals to accuracy-dominance: for every belief state that violates the probability calculus, there is some probabilistic belief state that is more accurate, come what may. The standard defense of conditionalization, on the other hand, appeals to expected accuracy: before the evidence is in, one should expect to do better by conditionalizing than by following any other rule. We present a new argument for conditionalization that appeals to accuracy-dominance, rather than expected accuracy. Our argument suggests that conditionalization is a rule of diachronic coherence: failing to conditionalize is not just a bad response to the evidence; it is also inconsistent.
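
To see the expected-accuracy idea in miniature before the dominance argument (an illustrative Python sketch of my own; the prior, partition, and rival rule are invented for the example):

    # Toy setup: three worlds, a prior, and a two-cell partition of the worlds.
    worlds = [0, 1, 2]
    prior = [0.5, 0.3, 0.2]
    partition = {0: "E", 1: "E", 2: "F"}  # cell the agent learns in each world

    def brier_inaccuracy(credence, world):
        # Squared Euclidean distance from the credences to the world's truth values.
        return sum((credence[w] - (1.0 if w == world else 0.0)) ** 2 for w in worlds)

    def conditionalize(cell):
        # Zero out worlds outside the learned cell and renormalize.
        total = sum(p for w, p in zip(worlds, prior) if partition[w] == cell)
        return [p / total if partition[w] == cell else 0.0
                for w, p in zip(worlds, prior)]

    def ignore_evidence(cell):
        # A rival rule that disregards what was learned.
        return [1/3, 1/3, 1/3]

    def expected_inaccuracy(rule):
        # Prior-expected inaccuracy of following the updating rule.
        return sum(p * brier_inaccuracy(rule(partition[w]), w)
                   for w, p in zip(worlds, prior))

    print(expected_inaccuracy(conditionalize))   # 0.375
    print(expected_inaccuracy(ignore_evidence))  # 0.666...

Conditionalization does better in prior expectation here; the argument in the paper establishes the stronger, dominance-style verdict that the abstract describes.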

PDF

Epistemic Utility and the Normativity of Logic

Logos and Episteme

How does logic relate to rational belief? Is logic normative for belief, as some say? What, if anything, do facts about logical consequence tell us about norms of doxastic rationality? In this paper, we consider a range of putative logic-rationality bridge principles. These purport to relate facts about logical consequence to norms that govern the rationality of our beliefs and credences. To investigate these principles, we deploy a novel approach, namely, epistemic utility theory. That is, we assume that doxastic attitudes have different epistemic value depending on how accurately they represent the world. We then use the principles of decision theory to determine which of the putative logic-rationality bridge principles we can derive from considerations of epistemic utility.

PDF

Book symposium on Accuracy and the Laws of Credence

Philosophy and Phenomenological Research

  • Précis and replies (PDF)
  • Contribution from R. A. Briggs
  • Contribution from Jim Joyce
  • Contribution from Matt Kotzen

On the accuracy of group credences

in Szabó Gendler, T. & J. Hawthorne (eds.) Oxford Studies in Epistemology volume 6

We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgment aggregation principle, Linear Pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this paper, I give an argument for Linear Pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
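
As a minimal sketch of the principle itself (a Python illustration; the experts, propositions, and weights are invented):

    def linear_pool(credences, weights):
        # Group credence in each proposition: weighted average of members' credences.
        propositions = credences[0].keys()
        return {p: sum(w * c[p] for w, c in zip(weights, credences))
                for p in propositions}

    experts = [{"rain": 0.8, "recession": 0.3},
               {"rain": 0.6, "recession": 0.5}]
    print(linear_pool(experts, [0.5, 0.5]))  # {'rain': 0.7, 'recession': 0.4}

Since a weighted average of probability functions is itself a probability function, the pooled credences remain probabilistically coherent.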

PDF

Making things right: the true consequences of decision theory in epistemology

in Ahlstrom-Vij, K. & J. Dunn (eds.) Epistemic Consequentialism (Oxford: Oxford University Press)

In his 1998 paper, ‘A Nonpragmatic Vindication of Probabilism’, Jim Joyce offered a novel argument for the credal principle of Probabilism. In this paper, I consider an objection to Joyce’s argument that has been raised by Hilary Greaves (‘Epistemic Decision Theory’, Mind, 2013); and I try to answer that objection.

PDF

Book symposium on Accuracy and the Laws of Credence

Episteme 14(1):1-69

  • Précis and replies (PDF)
  • Contribution from Fabrizio Cariani (PDF)
  • Contribution from Sophie Horowitz (PDF)
  • Contribution from Ben Levinstein (PDF)
  • Contribution from Julia Staffel (PDF)

The population ethics of belief: in search of an epistemic Theory X

Noûs doi: 10.1111/nous.12164

Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she’s also got credences in 999 million other propositions. Phoebe’s credences are all very accurate. Each of Daphne’s credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe’s or Daphne’s?

It is clear that this question is analogous to a question that has exercised ethicists over the past thirty years. How do we weigh a population consisting of some number of exceptionally happy and satisfied individuals against another population consisting of a much greater number of people whose lives are only just worth living? This is the question that occasions population ethics. In this paper, I go in search of the correct population ethics for credal states.

PDF, Journal

Jamesian epistemology formalised: an explication of ‘The Will to Believe’

Episteme 13(3):253-268

Famously, William James held that there are two commandments that govern our epistemic life: Believe truth! Shun error! In this paper, I give a formal account of James’ claim using the tools of epistemic utility theory. I begin by giving the account for categorical doxastic states – that is, full belief, full disbelief, and suspension of judgment. Then I show how the account plays out for graded doxastic states – that is, credences. The latter part of the paper thus answers a question left open in (Pettigrew 2014).
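
To give the flavour of the formal account for categorical states (a sketch of the standard Jamesian picture; the payoff values R and W are schematic, not the paper’s notation): suppose a true belief receives epistemic utility R > 0, a false belief receives -W < 0, and suspending judgment receives 0. Believing p at credence x then has expected epistemic utility

    xR - (1 - x)W,

which exceeds the value of suspending just in case x > W/(R + W). How heavily an agent weights ‘Shun error!’ (W) against ‘Believe truth!’ (R) thus fixes her threshold for rational belief.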

PDF, Journal

Accuracy and the Laws of Credence (Oxford: Oxford University Press)

In this book, we offer an extended investigation into a particular way of justifying the rational principles that govern our credences (or degrees of belief). The main principles that we justify are the central tenets of Bayesian epistemology, though we meet many other related principles along the way. These are: Probabilism, the claim that credences should obey the laws of probability; the Principal Principle, which says how credences in hypotheses about the objective chances should relate to credences in other propositions; the Principle of Indifference, which says that, in the absence of evidence, we should distribute our credences equally over all possibilities we entertain; and Conditionalization, the Bayesian account of how we should plan to respond when we receive new evidence. Ultimately, then, the book is a study in the foundations of Bayesianism.

To justify these principles, we look to decision theory. We treat an agent’s credences as if they were a choice she makes between different options. We give an account of the purely epistemic utility enjoyed by different sets of credences. And we appeal to the principles of decision theory to show that, when epistemic utility is measured in this way, the credences that violate the principles listed above are ruled out as irrational. The account of epistemic utility we give is the veritist’s: the sole fundamental source of epistemic utility for credences is their accuracy. Thus, this is an investigation in the version of epistemic utility theory known as accuracy-first epistemology. The book can also be read as an extended reply on behalf of the veritist to the evidentialist’s objection that veritism cannot account for certain evidential principles of credal rationality, such as the Principal Principle, the Principle of Indifference, and Conditionalization.

Publisher, Webpage

Accuracy, Risk, and the Principle of Indifference

Philosophy and Phenomenological Research 92(1):35-59

In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? The Principle of Indifference gives a very restrictive answer: it demands that an agent with no evidence divide her credences equally over all the possibilities she entertains. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, I offer a novel argument for the Principle of Indifference, which I call the Argument from Accuracy.
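
One way to see the flavour of an accuracy argument for the principle (a numeric illustration of my own, pairing the Brier measure with a worst-case decision rule; this is not the paper’s proof): among probabilistic credence functions over three possibilities, the uniform one minimizes worst-case inaccuracy.

    import itertools

    n, grid = 3, [i / 30 for i in range(31)]

    def worst_case_inaccuracy(c):
        # Worst Brier inaccuracy of credences c across the n possible worlds.
        return max(sum((c[j] - (1.0 if j == w else 0.0)) ** 2 for j in range(n))
                   for w in range(n))

    # Grid of (approximately) probabilistic credence functions over n worlds.
    simplex = [c for c in itertools.product(grid, repeat=n)
               if abs(sum(c) - 1.0) < 1e-9]
    print(min(simplex, key=worst_case_inaccuracy))  # (0.333..., 0.333..., 0.333...)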

PDF, Journal

Epistemic utility arguments for Probabilism (revised version)

in Zalta, E. (ed.) Stanford Encyclopedia of Philosophy

A survey article on epistemic utility arguments for Probabilism.

Website

Accuracy and the belief-credence connection

Philosophers’ Imprint 15(16):1-20

Probabilism is the thesis that an agent is rational only if her credences are probabilistic. This paper will be concerned with what we might call the Accuracy Dominance Argument for Probabilism (Rosenkrantz, 1981; Joyce, 1998, 2009). In this paper, I wish to identify and explore a lacuna in this argument that arises for those who take there to be (at least) two sorts of doxastic states: beliefs and credences.
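
Here is a toy instance of the dominance phenomenon the argument rests on (my own numerical example in Python, not Joyce’s general construction): credences in p and ¬p that sum to less than 1 are Brier-dominated by the result of sharing the shortfall equally between them.

    def brier(c, truth):
        # Squared Euclidean distance between credences and truth values.
        return sum((ci - ti) ** 2 for ci, ti in zip(c, truth))

    c = (0.2, 0.5)                     # incoherent: credences in p, not-p sum to 0.7
    gap = (1 - sum(c)) / 2
    c_star = (c[0] + gap, c[1] + gap)  # nearest coherent pair: (0.35, 0.65)

    for truth in [(1, 0), (0, 1)]:     # p true, or p false
        assert brier(c_star, truth) < brier(c, truth)

The same idea drives the general result: relative to the Brier score, moving an incoherent credence function to its nearest probabilistic neighbour brings it closer to every consistent assignment of truth values.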

PDF, Journal

Accuracy and Evidence

Dialectica 67(4):579-96

In ‘A Nonpragmatic Vindication of Probabilism’, Jim Joyce argues that our credences should obey the axioms of the probability calculus by showing that, if they don’t, there will be alternative credences that are guaranteed to be more accurate than ours. But it seems that accuracy is not the only goal of credences: there is also the goal of matching one’s credences to one’s evidence. I will consider four ways in which we might make this latter goal precise: on the first, the norms to which this goal gives rise act as ‘side constraints’ on our choice of credences; on the second, matching credences to evidence is a goal that is weighed against accuracy to give the overall cognitive value of credences; on the third, as on the second, proximity to the evidential goal and proximity to the goal of accuracy are both sources of value, but this time they are incomparable; on the fourth, the evidential goal is not an independent goal at all, but rather a byproduct of the goal of accuracy. All but the fourth way of making the evidential goal precise are pluralist about credal virtue: there is the virtue of being accurate and there is the virtue of matching the evidence and neither reduces to the other. The fourth way is monist about credal virtue: there is just the virtue of being accurate. The pluralist positions lead to problems for Joyce’s argument; the monist position avoids them. I endorse the latter.

PDF, Journal

A New Epistemic Utility Argument for the Principal Principle

Episteme 10(1):19-35

Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that paper assumed (roughly) that a credence in a proposition is better the closer it is to the objective chance of that proposition. In this paper, I present an epistemic utility argument for Probabilism and the Principal Principle that retains Joyce’s assumption rather than the alternative I endorsed in the earlier paper. I argue that this results in a superior argument for these norms.
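
A one-line calculation lies behind chance-accuracy assumptions of this kind (my illustration, using the quadratic inaccuracy measure): if the objective chance of X is ch(X), then the chance-expected inaccuracy of credence c in X is

    ch(X)(1 - c)² + (1 - ch(X))c² = (c - ch(X))² + ch(X)(1 - ch(X)),

which is uniquely minimized at c = ch(X). By the chance function’s own lights, then, the credence the Principal Principle recommends is the most accurate one to have.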

PDF, Journal

Epistemic utility and norms for credence

Philosophy Compass 8(10):897-908

Beliefs come in different strengths. An agent’s credence in a proposition is a measure of the strength of her belief in that proposition. Various norms for credences have been proposed. Traditionally, philosophers have tried to argue for these norms by showing that any agent who violates them will be led by her credences to make bad decisions. In this article, we survey a new strategy for justifying these norms. The strategy begins by identifying an epistemic utility function and a decision-theoretic norm; we then show that the decision-theoretic norm applied to the epistemic utility function yields the norm for credences that we wish to justify. We survey results already obtained using this strategy, and we suggest directions for future research.

PDF, Journal

Introducing…Epistemic Utility Theory

The Reasoner 7(1):10-11

A very brief overview of accuracy-based arguments for credal principles.

PDF

Accuracy, Chance, and the Principal Principle

Philosophical Review 121(2):241-275

In “A Nonpragmatic Vindication of Probabilism,” James M. Joyce attempts to “depragmatize” de Finetti’s prevision argument for the claim that our credences ought to satisfy the axioms of the probability calculus. This article adapts Joyce’s argument to give nonpragmatic vindications of David Lewis’s original Principal Principle as well as recent reformulations due to Ned Hall and Jenann Ismael. Joyce enumerates properties that a function must have if it is to measure the distance from a set of credences to a set of truth values; he shows that, on any such measure, and for any set of credences that violates the probability axioms, there is a set that satisfies those axioms that is closer to every possible set of truth values. This article replaces truth values with objective chances in this argument; it shows that for any set of credences that violates the probability axioms or the Principal Principle, there is a set that satisfies both that is closer to every possible set of objective chances and similarly for Ned Hall’s New Principle and Jenann Ismael’s Generalized Principal Principle. Along the way, the article provides new arguments for some of Joyce’s central conditions on distance measures, and it answers two pressing objections to Joyce’s strategy.

PDF, Journal

An Improper Introduction to Epistemic Utility Theory

in Regt, Henk de, Stephan Hartmann, and Samir Okasha (eds.) EPSA Philosophy of Science: Amsterdam 2009 (Springer)

A survey of accuracy-based arguments for Probabilism and Conditionalization.

PDF

An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy (with Hannes Leitgeb)

Philosophy of Science 77: 236-272 (Chosen for the Philosophers’ Annual 2010)

In this article and its prequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we make the norm mathematically precise; in this article, we derive its consequences. We show that the two core tenets of Bayesianism follow from Accuracy, while the characteristic claim of Objective Bayesianism follows from Accuracy together with an extra assumption. Finally, we show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed, and we describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.
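
For reference, the standard formulations (in my notation, which may differ from the article’s): when experience fixes new credences q_i in the cells E_i of a partition, Jeffrey Conditionalization sets the new credence function Q by

    Q(A) = Σ_i P(A | E_i) q_i,

and Rigidity is the condition that Q(A | E_i) = P(A | E_i) for each i. Given the new credences in the cells, updating by the rule above and satisfying Rigidity come to the same thing.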

PDF, Journal

An Objective Justification of Bayesianism I: Measuring Inaccuracy (with Hannes Leitgeb)

Philosophy of Science 77: 201-235

In this article and its sequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In this article, we make this norm mathematically precise. We describe epistemic dilemmas an agent might face if she attempts to follow Accuracy and show that the only measures of inaccuracy that do not create these dilemmas are the quadratic inaccuracy measures. In the sequel, we derive Bayesianism from Accuracy and show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed. We describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.
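
For reference, the best-known quadratic measure is the Brier score (standard material, in my notation): the inaccuracy of a credence function c at a world w is

    I(c, w) = Σ_A (c(A) - v_w(A))²,

where A ranges over the propositions on which c is defined and v_w(A) is 1 if A is true at w and 0 otherwise.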

PDF, Journal

Modelling Uncertainty: Review essay on Huber, F. and C. Schmidt-Petri (eds.) Degrees of Belief

Grazer Philosophische Studien 80: 309-316

The book under review provides a stimulating, informative, and focussed collection of new articles that survey a topic in formal epistemology that is fast becoming one of the central topics in mainstream epistemology. The twelve articles, as well as Huber’s excellent Introduction, address the following two questions:

  1. How should we model or represent an agent’s epistemic state?
  2. What constraints does rationality impose on an agent’s epistemic states thus modelled?

I treat each of these questions in turn, and conclude with a detailed consideration of an argument from Joyce’s article.

PDF, Journal