Papers by year with abstracts

In preparation

On justifying an account of moral goodness to each individual: contractualism, utilitarianism, and prioritarianism

Many welfarists wish to assign to each possible state of the world a numerical value that measures something like its moral goodness. How are we to determine this quantity? This paper proposes a contractualist approach: a legitimate measure of moral goodness is one that could be justified to each member of the population in question. How do we justify a measure of moral goodness to each individual? Each member recognises that the social chooser’s measure of moral goodness will have to deviate from the well-being function of at least some members of the population; the measure must be a compromise between the different levels of well-being within the population. Some compromises are more reasonable than others, and some are better justifiable to a given member of the population than others. We can nonetheless justify the measure to each member if it deviates no more than is necessary, and if the deviations from each member are given equal weight in whatever process we use to determine it. This paper proposes that we begin with a measure of the distance from a proposed compromise to an individual’s level of well-being, and then say that the measure of moral goodness is the candidate compromise that minimizes the sum of the distances from it to the individuals’ levels of well-being. I describe a range of such distance measures and show that some yield utilitarianism, while others yield different versions of prioritarianism.
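
A toy numerical sketch of this proposal, with hypothetical numbers and two illustrative distance measures of my own choosing: squared Euclidean distance, which selects the average and so behaves like utilitarianism, and distance taken on a square-root rescaling of well-being, which tilts the compromise towards the worse off in a prioritarian spirit.

```python
import numpy as np

wellbeing = np.array([2.0, 4.0, 9.0])    # hypothetical well-being levels
candidates = np.linspace(0, 10, 10001)   # candidate compromise values

def best_compromise(distance):
    """Return the candidate that minimizes the sum of distances to the individuals."""
    totals = [sum(distance(c, w) for w in wellbeing) for c in candidates]
    return candidates[int(np.argmin(totals))]

squared = lambda c, w: (c - w) ** 2                    # selects the mean: utilitarian
concave = lambda c, w: (np.sqrt(c) - np.sqrt(w)) ** 2  # extra weight to the worse off

print(best_compromise(squared))  # 5.0, the average of 2, 4 and 9
print(best_compromise(concave))  # about 4.57, below the mean: prioritarian in spirit
```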

PDF

What is the characteristic wrong of testimonial injustice?

My aim in this paper is to identify the wrong that is done in all cases of testimonial injustice, if there is one. Miranda Fricker (2007) proposes one account of this distinctive wrong, and Gaile Pohlhaus Jr. (2014) offers another. I think neither works. Nor does an account based on giving due respect to the testifier’s epistemic competence. Nor does an account based on exposing the testifier to substantial risk of harm. Rachel Fraser (2023) describes a further account, and the proposal I favour is a slight amendment of this.

PDF

Three questions for liberals

In this paper, I ask three questions of the liberal. In each, I fill in philosophical detail around a certain sort of complaint raised in current public debates about their position. In the first, I probe the limits of the liberal’s tolerance for civil disobedience; in the second, I ask how the liberal can adjudicate the most divisive moral disputes of the age; and, in the third, I suggest the liberal faces a problem when there is substantial disagreement about the boundaries of the rational and the reasonable.

PDF

When are choices, actions, and consent based on adaptive preferences nonautonomous?

Adaptive preferences give rise to puzzles in ethics, political philosophy, decision theory, and the theory of action. Like our other preferences, adaptive preferences lead us to make choices, take action, and give consent. In ‘False Consciousness for Liberals’, recently published in The Philosophical Review, David Enoch (2020) proposes a criterion by which to identify when these choices, actions, and acts of consent are less than fully autonomous; that is, when they suffer from what Natalie Stoljar (2014) calls an ‘autonomy deficit’. According to Enoch, such actions are not protected in the usual way against interference by others; there is not the same prohibition against trying to prevent someone from acting in a particular way when that action is motivated by such adaptive preferences and is an attempt to satisfy them. In this note, I raise two concerns about Enoch’s criterion.

PDF

Taking a Good look at the norms of gathering and responding to evidence

In the recent philosophical literature on inquiry, epistemologists point out that their subject has often begun at the point at which you already have your evidence and then focussed on identifying the beliefs for which that evidence provides justification. But we are not mere passive recipients of evidence. While some comes to us unbidden, we often actively collect it. This has long been recognised, but typically epistemologists have taken the norms that govern inquiry to be practical, not epistemic. The recent literature challenges this assumption and uncovers a rich range of questions about the epistemic normativity of inquiry. In this paper, I approach these questions from the formal side of epistemology. Developing out of the philosophy of science, as it did, this branch of epistemology has long discussed inquiry. And, building on the insights of David Blackwell (1951) and I. J. Good (1967), it has produced a reasonably well-developed framework in which to understand norms of inquiry, both epistemic and practical. In the first half of the paper, I will present the pragmatic versions of this framework due to Blackwell and Good, and the epistemic version due to Wayne Myrvold (2012); in the second half of the paper, I put this framework to work, turning to some of the questions from the recent debate about inquiry and asking how the Blackwell-Good-Myrvold approach can help us answer them. Questions will include: Are there purely epistemic norms that govern these actions (Flores and Woodard forthcoming)? When should we initiate an inquiry, when should we continue it, when should we conclude it, and when should we reopen it? How should we understand Julia Staffel’s distinction between transitional attitudes and terminal attitudes (Staffel 2021a,b)? How do epistemic norms of inquiry relate to epistemic norms of belief or credence, and can they conflict (Friedman 2020)? And how should we understand the epistemic error that occurs when someone is resistant to evidence (Simion 2023)?
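
To give the flavour of the Blackwell-Good result this framework builds on, here is a toy value-of-information calculation; the decision problem and all numbers are hypothetical. When the experiment costs nothing, choosing after conditionalizing on its outcome has expected utility at least as great as choosing now.

```python
prior = {'H': 0.6, 'notH': 0.4}
utility = {('act1', 'H'): 10, ('act1', 'notH'): 0,
           ('act2', 'H'): 4,  ('act2', 'notH'): 6}
likelihood = {('pos', 'H'): 0.9, ('pos', 'notH'): 0.2,   # P(evidence | state)
              ('neg', 'H'): 0.1, ('neg', 'notH'): 0.8}

def exp_utility(act, credence):
    return sum(credence[s] * utility[(act, s)] for s in credence)

def best(credence):
    return max(exp_utility(a, credence) for a in ('act1', 'act2'))

choose_now = best(prior)

# Expected utility of looking first, then choosing with the updated credences:
choose_later = 0.0
for e in ('pos', 'neg'):
    p_e = sum(prior[s] * likelihood[(e, s)] for s in prior)
    posterior = {s: prior[s] * likelihood[(e, s)] / p_e for s in prior}
    choose_later += p_e * best(posterior)

print(choose_now, choose_later)  # 6.0 vs about 7.56: gathering free evidence pays
```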

PDF

On Choosing How to Choose

A decision theory is self-recommending if, when you ask it which decision theory you should use, it considers itself to be among the permissible options. I show that many alternatives to expected utility theory are not self-recommending, and I argue that this tells against them.

PDF, Mathematica notebook

Pooling, Products, and Priors

(with Jonathan Weisberg) We often learn the opinions of others without hearing the evidence on which they’re based. The orthodox Bayesian response is to treat the reported opinion as evidence itself and update on it by conditionalizing. But sometimes this isn’t feasible. In these situations, a simpler way of combining one’s existing opinion with opinions reported by others would be useful, especially if it yields the same results as conditionalization. We will show that one method—upco, also known as multiplicative pooling—is specially suited to this role when the opinions you wish to pool concern hypotheses about chances. The result has interesting consequences: it addresses the problem of disagreement between experts; and it sheds light on the social argument for the uniqueness thesis.
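
A minimal sketch of the rule at issue, in the simple finite case with hypothetical numbers: upco multiplies the agents’ credences in each chance hypothesis and renormalizes.

```python
def upco(*credence_functions):
    """Multiplicative pooling: multiply credences pointwise, then renormalize.
    Assumes every agent assigns credences to the same hypotheses."""
    hypotheses = credence_functions[0].keys()
    products = {h: 1.0 for h in hypotheses}
    for c in credence_functions:
        for h in hypotheses:
            products[h] *= c[h]
    total = sum(products.values())
    return {h: p / total for h, p in products.items()}

# Two agents' credences over three chance hypotheses (hypothetical numbers):
mine  = {'bias 0.25': 0.2, 'bias 0.50': 0.5, 'bias 0.75': 0.3}
yours = {'bias 0.25': 0.1, 'bias 0.50': 0.3, 'bias 0.75': 0.6}
print(upco(mine, yours))  # {'bias 0.25': 0.057..., 'bias 0.50': 0.428..., 'bias 0.75': 0.514...}
```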

PDF

Believing is said of groups in many ways (and so it should be said of them in none)

In the first half of this paper, I argue that group belief ascriptions are highly ambiguous. What’s more, in many cases, neither the available contextual factors nor known pragmatic considerations are sufficient to allow the audience to identify which of the many possible meanings is intended. In the second half, I argue that this ambiguity often has bad consequences when a group belief ascription is heard and taken as testimony. And indeed it has these consequences even when the ascription is true on the speaker’s intended interpretation, when the speaker does not intend to mislead and indeed intends to cooperatively inform, and when the audience incorporates the evidence from the testimony as they should. I conclude by arguing that these consequences should lead us to stop using such ascriptions.

PDF

Forthcoming

Jeffrey Pooling (with Jonathan Weisberg)

Philosophers’ Imprint

How should your opinion change in response to the opinion of an epistemic peer? We show that the pooling rule known as “upco” is the unique answer satisfying some natural desiderata. If your revised opinion will influence your opinions on other matters by Jeffrey conditionalization, then upco is the only standard pooling rule that ensures the order in which peers are consulted makes no difference. Popular proposals like linear pooling, geometric pooling, and harmonic pooling cannot boast the same. In fact, no alternative to upco can if it possesses four minimal properties which these proposals share.
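
Here is a toy order-dependence check; it uses simple sequential pairwise pooling rather than the paper’s full Jeffrey-conditionalization setting, so it only gestures at the result: upco commutes, while equal-weight linear pooling does not.

```python
def upco(p, q):
    prod = {h: p[h] * q[h] for h in p}
    z = sum(prod.values())
    return {h: v / z for h, v in prod.items()}

def linear(p, q):
    return {h: 0.5 * p[h] + 0.5 * q[h] for h in p}

me    = {'H1': 0.5, 'H2': 0.5}
peerA = {'H1': 0.8, 'H2': 0.2}
peerB = {'H1': 0.3, 'H2': 0.7}

for rule in (upco, linear):
    ab = rule(rule(me, peerA), peerB)   # consult A first, then B
    ba = rule(rule(me, peerB), peerA)   # consult B first, then A
    print(rule.__name__, ab, ba)        # identical for upco; different for linear
```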

PDF

Geometric Pooling: A User’s Guide (with Jonathan Weisberg)

The British Journal for the Philosophy of Science

Much of our information comes to us indirectly, in the form of conclusions others have drawn from evidence they gathered. When we hear these conclusions, how can we modify our own opinions so as to gain the benefit of their evidence? In this paper we study the method known as geometric pooling. We consider two arguments in its favour, raising several objections to one, and proposing an amendment to the other.
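
A minimal sketch of the method under study, in the equal-weight finite case with hypothetical numbers: geometric pooling takes the geometric mean of the credences and renormalizes.

```python
from math import prod

def geometric_pool(credence_functions):
    """Equal-weight geometric pooling over a shared set of hypotheses."""
    n = len(credence_functions)
    geo = {h: prod(c[h] for c in credence_functions) ** (1 / n)
           for h in credence_functions[0]}
    z = sum(geo.values())
    return {h: v / z for h, v in geo.items()}

print(geometric_pool([{'H': 0.9, 'notH': 0.1},
                      {'H': 0.4, 'notH': 0.6}]))  # {'H': 0.71..., 'notH': 0.28...}
```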

PDF

Should longtermists recommend hastening extinction rather than delaying it?

The Monist

Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our current resources, are those that focus on ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and improving the quality of those lives in that long future. The central argument for this conclusion is that, given a fixed amount of a resource that we are able to devote to global priorities, the longtermist’s favoured interventions have greater expected goodness than each of the other available interventions, including those that focus on the health and well-being of the current population. In this paper, I argue that, even granting the longtermist’s axiology and their consequentialist ethics, we are not morally required to choose whatever option maximises expected utility, and may not be permitted to do so. Instead, if their axiology and consequentialism are correct, we should choose using a decision theory that is sensitive to risk, one that allows us to give greater weight to worst-case outcomes than expected utility theory does. And such decision theories do not recommend longtermist interventions. Indeed, sometimes, they recommend hastening human extinction. Many, though not all, will take this as a reductio of the longtermist’s axiology or consequentialist ethics. I remain agnostic on the conclusion we should draw.
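
To illustrate the sort of risk-sensitive decision theory in play, here is a toy risk-weighted expected utility calculation in the style of Buchak (2013); the convex risk function and the numbers are mine, purely for illustration. A tiny chance of an enormous payoff can beat a sure thing by expected utility while losing to it once worse cases are overweighted.

```python
def reu(outcomes, r):
    """Risk-weighted expected utility in the style of Buchak (2013).
    outcomes: list of (probability, utility) pairs."""
    outs = sorted(outcomes, key=lambda pu: pu[1])        # worst outcome first
    value = outs[0][1]
    for i in range(1, len(outs)):
        p_at_least = sum(p for p, _ in outs[i:])         # chance of doing at least this well
        value += r(p_at_least) * (outs[i][1] - outs[i - 1][1])
    return value

r = lambda p: p ** 2   # convex risk function: overweights worse-case outcomes

safe   = [(1.0, 50)]                  # a sure, moderate benefit now
gamble = [(0.99, 0), (0.01, 10000)]   # tiny chance of an enormous far-future payoff

print(reu(safe, r), reu(gamble, r))   # 50 vs 1.0: REU prefers the safe option,
                                      # though expected utility (50 vs 100) does not
```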

PDF

Consequences of Calibration (with J. R. G. Williams)

The British Journal for the Philosophy of Science

In this paper, we offer a new set of axioms that characterise the epistemic utility functions that are most often used in arguments in favour of Bayesian norms such as Probabilism and Conditionalization. These are the additive and continuous strictly proper epistemic utility functions. Our characterization is based on a suggestion by Frank P. Ramsey and appeals to the virtue of calibration. We begin by describing Ramsey’s proposal and making our characterization precise; then we answer objections inspired by other treatments of calibration in epistemic utility theory.
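
For readers unfamiliar with strict propriety, here is the standard example, the Brier score; the paper’s characterization covers the whole class of additive, continuous, strictly proper measures, of which this is just one instance. Whatever your credence, you uniquely maximize expected epistemic utility by reporting that very credence.

```python
def brier(credence, truth):
    return -(credence - truth) ** 2   # epistemic utility: higher is better

def expected_eu(report, credence):
    """Expected epistemic utility of reporting `report`, by your credence's lights."""
    return credence * brier(report, 1) + (1 - credence) * brier(report, 0)

my_credence = 0.7
best = max(range(101), key=lambda x: expected_eu(x / 100, my_credence))
print(best / 100)  # 0.7: honesty uniquely maximises expected epistemic utility
```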

PDF

Formal Methods

Cambridge Handbook of Analytic Philosophy (edited by Marcus Rossberg)

In this handbook entry, I survey the different ways in which formal mathematical methods have been applied to philosophical questions throughout the history of analytic philosophy. I consider: formalization in symbolic logic, with examples such as Aquinas’ third way and Anselm’s ontological argument; Bayesian confirmation theory, with examples such as the fine-tuning argument for God and the paradox of the ravens; foundations of mathematics, with examples such as Hilbert’s programme and Gödel’s incompleteness theorems; social choice theory, with examples such as Condorcet’s paradox and Arrow’s theorem; ‘how possibly’ results, with examples such as Condorcet’s jury theorem and recent work on intersectionality theory; and the application of advanced mathematics in philosophy, with examples such as accuracy-first epistemology.
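
As a taste of the formal results surveyed, here is Condorcet’s paradox in executable form, with a hypothetical three-voter profile: pairwise majority voting can yield a cycle.

```python
voters = [('A', 'B', 'C'), ('B', 'C', 'A'), ('C', 'A', 'B')]  # three rankings

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

for x, y in [('A', 'B'), ('B', 'C'), ('C', 'A')]:
    print(f'{x} beats {y}:', majority_prefers(x, y))  # all True: a majority cycle
```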

PDF

2023

Credences

in Routledge Encyclopedia of Philosophy

Online, PDF

Review of Carol J. Adams, Alice Crary, and Lori Gruen (eds.) The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism

Mind

PDF

How should your beliefs change when your awareness grows?

Episteme (Open Access)

Epistemologists who study partial beliefs, or credences, have a well-developed account of how you should change your credences when you learn new evidence; that is, when your body of evidence grows. What’s more, they boast a diverse range of epistemic and pragmatic arguments that support that account. But they do not have a satisfactory account of when and how you should change your credences when you become aware of possibilities and propositions you have not entertained before; that is, when your awareness grows. In this paper, I consider each of the arguments for the credal epistemologist’s account of how to respond to evidence, and I ask whether they can help us generate an account of how to respond to awareness growth. The results are surprising: the arguments that all support the same norms for responding to evidence growth support a number of different norms when they are applied to awareness growth. Some of these norms seem too weak, others too strong. I ask what we should conclude from this, and argue that our credal response to awareness growth is considerably less rigorously constrained than our credal response to new evidence.

PDF

Nudging for Changing Selves

Synthese (special issue on Transformative Experience, Authenticity, and Rationality)

When is it legitimate for a government to ‘nudge’ its citizens, in the sense described by Richard Thaler and Cass Sunstein (Thaler & Sunstein 2008)? In their original work on the topic, Thaler and Sunstein developed the ‘as judged by themselves’ (or AJBT) test to answer this question (Thaler & Sunstein 2008, 5). In a recent paper, L. A. Paul and Sunstein (Paul & Sunstein ms) raised a concern about this test: it often seems to give the wrong answer in cases in which we are nudged to make a decision that leads to what Paul calls a personally transformative experience, that is, one that results in our values changing (Paul 2014). In those cases, the nudgee will judge the nudge to be legitimate after it has taken place, but only because their values have changed as a result of the nudge. In this paper, I take up the challenge of finding an alternative test. I draw on my aggregate utility account of how to choose in the face of what Edna Ullmann-Margalit (2006) calls big decisions, that is, decisions that lead to these personally transformative experiences (Pettigrew 2019, Chapters 6 and 7).

PDF

2022

Autonomy for Changing Selves

Routledge Handbook of Autonomy (edited by Ben Colburn)

Our values change. What we value, want, desire, prefer, and how much; for nearly everyone, these will be different at different times in their life. These changes can be gradual or abrupt; they can be long-lasting or short-lived; and they can be induced by forces outside yourself or they can come from within or they can have no specific catalyst at all. Such preference change raises a number of questions for our theorising about rational choice, and these have been discussed at length. In §2 and §3, I’ll outline two of these questions along with some of the putative solutions that have been proposed. But preference change also raises questions for our theorising about autonomy, and these have hardly been considered at all. In §4, I’ll outline three problems for personal autonomy; and in §5, I’ll outline one problem for political autonomy. In §6, I conclude.

PDF

Competing reasons, incomplete preferences, and framing effects

Behavioral and Brain Sciences (Open Access)

(Commentary on José Luis Bermúdez ‘Rational Framing Effects: A Multidisciplinary Case’)

The quasi-cyclical preferences that Bermúdez ascribes to Agamemnon and others in analogous situations do not best represent them. I offer two alternative accounts. One works best if the preference ordering is taken to be the agent’s personal betterness ordering of acts; the other works best if it is taken to provide a summary of the agent’s dispositions to act.

Journal

Aggregating agents with opinions about different propositions

Synthese (Open Access) 200(5):1-25

There are many reasons we might want to take the opinions of various individuals and pool them to give the opinions of the group they constitute. If all the individuals in the group have probabilistic opinions about the same propositions, there is a host of pooling functions we might deploy, such as linear or geometric pooling. However, there are also cases where different members of the group assign probabilities to different sets of propositions, which might overlap a lot, a little, or not at all. There are far fewer proposals for how to proceed in these cases, and those there are have undesirable features. I begin by considering four proposals and arguing that they don’t work. Then I’ll describe my own proposal, which is intended to cover the situation in which we want to pool the individual opinions in order to ascribe an opinion to the group considered as an agent in its own right.

PDF

Accuracy-first epistemology without Additivity

Philosophy of Science 89(1):128-151

Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.

PDF, Journal

2021

On the pragmatic and epistemic virtues of inference to the best explanation

Synthese (Open Access) 199: 12407–12438

In a series of papers over the past twenty years, and in a new book, Igor Douven (sometimes in collaboration with Sylvia Wenmackers) has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey Douven’s worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.

PDF, Journal

Radical epistemology, structural explanations, and epistemic weaponry

Philosophical Studies (Open Access) 179 (1): 289-304

When is a belief justified? There are three families of arguments we typically use to support different accounts of justification: (i) arguments from our intuitive responses to vignettes that involve the concept; (ii) arguments from the theoretical role we would like the concept to play in epistemology; and (iii) arguments from the practical, moral, and political uses to which we wish to put the concept. I focus particularly on the third sort (iii), and specifically on an argument of this sort offered by Clayton Littlejohn (2012) and Amia Srinivasan (2018) in favour of externalism. I counter Srinivasan’s argument in two ways: (a) first, I show that the internalist’s concept of justification might figure just as easily in the sorts of structural explanation Srinivasan thinks our political goals require us to give; and (b) I argue that the internalist’s concept is needed for a particular political task, namely, to help us build more effective defences against what I call epistemic weapons. I conclude that we should adopt an Alstonian pluralism about the concept of justification.

PDF, Journal

Bayesian updating when what you learn might be false

Erkenntnis doi: 10.1007/s10670-020-00356-8 (Open Access)

Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that, whenever I become certain of something, it is true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla’s new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla’s version follows as a special case of that. Second, I want to show how to generalise R. A. Briggs and Richard Pettigrew’s Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs & Pettigrew, 2018). In both cases, these arguments proceed by first establishing a very general reflection principle.

Journal, PDF

2020

Logical ignorance and logical learning

Synthese 198(10): 9991-10020 (Open Access)

According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Ian Hacking (1967), I develop a version of Bayesianism that permits logical ignorance. This includes an account of the synchronic norms that govern a logically ignorant individual at any given time, as well as an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.

PDF, Journal

A Note on Deterministic Updating and van Fraassen’s symmetry argument for Conditionalization

Philosophical Studies 178(2):665-673 (Open Access)

In a recent paper, I argue that the pragmatic and epistemic arguments for Bayesian updating are based on an unwarranted assumption, which I call Deterministic Updating, and which says that your updating plan should be deterministic. In that paper, I did not consider whether the symmetry arguments due to Hughes and van Fraassen make the same assumption (Hughes & van Fraassen 1984; van Fraassen 1987). In this note, I show that they do.

PDF, Video, Journal

Transformative experience and the knowledge norms for action: Moss on Paul’s challenge to decision theory

in Lambert, E. and J. Schwenkler (eds.) Becoming Someone New: Essays on Experience, Choice, and Change (OUP)

L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul’s challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty, et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul’s behalf (Moss 2017) — we’ll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss’ novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss’ reply and argue that it fails. I argue first that it fails as a reply made on Paul’s behalf, since it forces us to abandon many of the features of Paul’s challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul’s intentions.

PDF, Book

2019

Internalism, externalism, and the KK principle (with Alexander Bird)

Erkenntnis 86: 1713–1732 (Open Access)

This paper examines the relationship between the KK principle and the epistemological theses of externalism and internalism. There is often thought to be a very close relationship between externalism and the rejection of the KK principle and between internalism and its acceptance. How strong are the connections? The stronger proposals are: externalism entails the denial of the KK principle; internalism entails the truth of the KK principle. We will consider a number of problems for the theses as stated; we will present two ways of amending them so that they avoid these problems.

PDF, Journal

What is conditionalization, and why should we do it?

Philosophical Studies 177(11), 3427-3463 (Open Access)

Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
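
For concreteness, here is the norm in its simplest finite formulation, with hypothetical numbers; the paper distinguishes several non-equivalent formulations, which this toy case glosses over.

```python
prior = {'w1': 0.3, 'w2': 0.2, 'w3': 0.5}   # credences over possible worlds
E = {'w1', 'w3'}                            # the proposition learned

def conditionalize(prior, E):
    """New credences = old credences conditional on the learned proposition E."""
    p_E = sum(p for w, p in prior.items() if w in E)
    return {w: (p / p_E if w in E else 0.0) for w, p in prior.items()}

print(conditionalize(prior, E))  # {'w1': 0.375, 'w2': 0.0, 'w3': 0.625}
```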

PDF, Journal, Video

On the Expected Utility Objection to the Dutch Book Argument for Probabilism

Noûs 55(1):23-38

The Dutch Book Argument for Probabilism assumes Ramsey’s Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey’s Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018), which I call the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey’s Thesis. I suggest two responses to this objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
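
A minimal instance of the kind of Dutch book the argument concerns, with hypothetical numbers: an agent whose credences in H and in not-H sum to more than 1, and who pays the prices Ramsey’s Thesis dictates, faces a sure loss.

```python
stake = 1.0
price_H, price_notH = 0.6, 0.6   # prices fixed, per Ramsey's Thesis, by the credences

for H_true in (True, False):
    payout = stake * H_true + stake * (not H_true)   # exactly one bet pays off
    print(payout - (price_H + price_notH))           # -0.2 either way: a sure loss
```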

PDF, Journal, Video

Veritism, Epistemic Risk, and the Swamping Problem

Australasian Journal of Philosophy 97(4): 761-774

Veritism says that the fundamental source of epistemic value for a doxastic state is the extent to which it represents the world correctly—that is, its fundamental epistemic value is determined entirely by its truth or falsity. The Swamping Problem says that Veritism is incompatible with two pre-theoretic beliefs about epistemic value (Zagzebski 2003, Kvanvig 2003):

  1. a true justified belief is more (epistemically) valuable than a true unjustified belief;
  2. a false justified belief is more (epistemically) valuable than a false unjustified belief.

In this paper, I consider the Swamping Problem from the vantage point of decision theory. I note that the central premise in the argument is what Stefansson and Bradley (2015) call Chance Neutrality in Richard Jeffrey’s decision-theoretic framework. And I describe their argument that it should be rejected. Using this insight, I respond to the Swamping Problem on behalf of the veritist.

PDF, Journal

On the accuracy of group credences

in Szabó Gendler, T. & J. Hawthorne (eds.) Oxford Studies in Epistemology volume 6

We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgment aggregation principle, Linear Pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this paper, I give an argument for Linear Pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
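
A quick sketch of the principle being defended, in the equal-weight finite case with hypothetical numbers: the group’s credence in each proposition is a weighted average of the members’ credences.

```python
def linear_pool(credence_functions, weights=None):
    """Linear Pooling: weighted average of the members' credences."""
    n = len(credence_functions)
    weights = weights or [1 / n] * n
    return {h: sum(w * c[h] for w, c in zip(weights, credence_functions))
            for h in credence_functions[0]}

members = [{'rising': 0.9, 'not rising': 0.1},
           {'rising': 0.7, 'not rising': 0.3},
           {'rising': 0.8, 'not rising': 0.2}]
print(linear_pool(members))  # {'rising': 0.8..., 'not rising': 0.2...}
```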

PDF

2018

What is justified credence?

Episteme 18(1): 16-30

In this paper, I seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn’s is a version of process reliabilism (Dunn, 2015), while Weng Hong Tang offers a version of indicator reliabilism (Tang, 2016). As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts explains how justification is connected to anything of epistemic value. We will call this the Connection Problem. I begin by describing Dunn’s process reliabilism and Tang’s indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn’s reliabilism and Tang’s. Thus, I reach the top of the same mountain as well.

PDF, Journal

What we talk about when we talk about numbers

Annals of Pure and Applied Logic doi: 10.1016/j.apal.2018.08.009

In this paper, I describe and motivate a new species of mathematical structuralism, which I call Instrumental Nominalism about Set-Theoretic Structuralism. As the name suggests, this approach takes standard Set-Theoretic Structuralism of the sort championed by Bourbaki and removes its ontological commitments by taking an instrumental nominalist approach to that ontology of the sort described by Joseph Melia and Gideon Rosen. I argue that this avoids all of the problems that plague other versions of structuralism.

PDF, Journal

An accuracy-dominance argument for conditionalization (with R. A. Briggs)

Noûs doi: 10.1111/nous.12258

Epistemic decision theorists aim to justify Bayesian norms by arguing that these norms further the goal of epistemic accuracy—having beliefs that are as close as possible to the truth. The standard defense of probabilism appeals to accuracy-dominance: for every belief state that violates the probability calculus, there is some probabilistic belief state that is more accurate, come what may. The standard defense of conditionalization, on the other hand, appeals to expected accuracy: before the evidence is in, one should expect to do better by conditionalizing than by following any other rule. We present a new argument for conditionalization that appeals to accuracy-dominance, rather than expected accuracy. Our argument suggests that conditionalization is a rule of diachronic coherence: failing to conditionalize is not just a bad response to the evidence; it is also inconsistent.
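
Here is a tiny instance of the accuracy-dominance phenomenon that the standard defense of probabilism appeals to; the numbers are mine, and the Brier score stands in for the general inaccuracy measures the paper works with.

```python
def brier_inaccuracy(credences, truths):
    return sum((c - t) ** 2 for c, t in zip(credences, truths))

incoherent, coherent = (0.6, 0.6), (0.5, 0.5)   # credences in H and in not-H
for world in [(1, 0), (0, 1)]:                  # H true / H false
    print(brier_inaccuracy(incoherent, world), brier_inaccuracy(coherent, world))
# 0.52 vs 0.50 in both worlds: the coherent credences are more accurate come what may.
```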

PDF, Journal

Making things right: the true consequences of decision theory in epistemology

in Ahlstrom-Vij, K. & J. Dunn (eds.) Epistemic Consequentialism (Oxford: Oxford University Press) 220-240.

In his 1998 paper, ‘A Nonpragmatic Vindication of Probabilism’, Jim Joyce offered a novel argument for the credal principle of Probabilism. In this paper, I consider an objection to Joyce’s argument that has been raised by Hilary Greaves (‘Epistemic Decision Theory’, Mind, 2013); and I try to answer that objection.

PDF, Book webpage

Book symposium on Accuracy and the Laws of Credence

Philosophy and Phenomenological Research 96(3):749-754; 784-800

  • Précis (PDF)
  • Contribution from R. A. Briggs (website)
  • Contribution from Jim Joyce (website)
  • Contribution from Matt Kotzen (website)
  • Replies (PDF)

Review of Normal Decisions by Edna Ullmann-Margalit, edited by Avishai Margalit and Cass R. Sunstein

Notre Dame Philosophical Reviews

PDF, Webpage

2017

The Principal Principle does not imply the Principle of Indifference

The British Journal for the Philosophy of Science doi: 10.1093/bjps/axx060

In a recent paper in the British Journal for the Philosophy of Science, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson (henceforth HLWW) argue that the Principal Principle entails the Principle of Indifference. In this paper, I argue that it does not. Lewis’ version of the Principal Principle notoriously depends on a notion of admissibility, which Lewis uses to restrict its application. HLWW do not give a precise account of admissibility either, but they do appeal to two principles concerning admissibility, which they call Condition 1 and Condition 2. There are two ways of reading their argument, depending on how you understand the status of Conditions 1 and 2. Reading 1: The correct account of admissibility is determined independently of these two principles, and yet these two principles follow from that correct account. Reading 2: The correct account of admissibility is determined in part by these two principles, so that the principles follow from that account but only because the correct account is constrained so that it must satisfy them. HLWW then show that, given an account of admissibility on which Conditions 1 and 2 hold, the Principal Principle entails the Principle of Indifference. I will argue that, on either reading of the argument, it fails. I will argue that there is a plausible account of admissibility on which Conditions 1 and 2 are false. That defeats the first reading of the argument. I will then argue that the intuitions that lead us to assent to Condition 2 also lead us to assent to other very closely related principles that are inconsistent with Condition 2. This, I claim, casts doubt on the reliability of those intuitions, and thus removes our justification for Condition 2. This defeats the second reading of the HLWW argument. Thus, the argument fails.

PDF, Journal

Aggregating incoherent agents who disagree

Synthese doi: 10.1007/s11229-017-1613-7 (Open Access)

In this paper, we explore how we should aggregate the degrees of belief of a group of agents to give a single coherent set of degrees of belief, when at least some of those agents might be probabilistically incoherent. There are a number of ways of aggregating degrees of belief, and there are a number of ways of fixing incoherent degrees of belief. When we have picked one of each, should we aggregate first and then fix, or fix first and then aggregate? Or should we try to do both at once? And when do these different procedures agree with one another? In this paper, we focus particularly on the final question.
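
To see why the final question is non-trivial, consider a toy case; the fixing and aggregating methods below, renormalization and equal-weight averaging, are my choices for illustration rather than the paper’s. The two orders of operation come apart.

```python
def fix(c):
    """Fix incoherence by renormalizing so the credences sum to 1."""
    z = sum(c.values())
    return {h: v / z for h, v in c.items()}

def aggregate(cs):
    """Aggregate by equal-weight linear averaging."""
    return {h: sum(c[h] for c in cs) / len(cs) for h in cs[0]}

agents = [{'H': 0.8, 'notH': 0.4}, {'H': 0.1, 'notH': 0.1}]  # both incoherent
print(aggregate([fix(c) for c in agents]))   # fix first:       H = 0.583...
print(fix(aggregate(agents)))                # aggregate first: H = 0.642...
```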

PDF, Journal

Epistemic Utility and the Normativity of Logic

Logos and Episteme VII(4):455-492

How does logic relate to rational belief? Is logic normative for belief, as some say? What, if anything, do facts about logical consequence tell us about norms of doxastic rationality? In this paper, we consider a range of putative logic-rationality bridge principles. These purport to relate facts about logical consequence to norms that govern the rationality of our beliefs and credences. To investigate these principles, we deploy a novel approach, namely, epistemic utility theory. That is, we assume that doxastic attitudes have different epistemic value depending on how accurately they represent the world. We then use the principles of decision theory to determine which of the putative logic-rationality bridge principles we can derive from considerations of epistemic utility.

PDF, Journal

Book symposium on Accuracy and the Laws of Credence

Episteme 14(1):1-69

  • Précis and replies (PDF)
  • Contribution from Fabrizio Cariani (PDF)
  • Contribution from Sophie Horowitz (PDF)
  • Contribution from Ben Levinstein (PDF)
  • Contribution from Julia Staffel (PDF)

2016

Illness as transformative experience (with Havi Carel and Ian James Kidd)

The Lancet 388(10050):1152-53

Imagine that you need to decide whether to adopt a child or not. It’s the only avenue to parenthood that is open to you. If you adopt a child, you will become a parent. You will experience the (currently unknown) highs and lows of being a parent. If you decide not to adopt, you will never know what being a parent is like. The decision you are asked to make is doubly risky. This problem has been discussed recently by philosopher L A Paul in her book Transformative Experience. Paul suggests that experiences such as becoming a parent are doubly transformative. First, they are epistemically transformative: you can only learn what it is like to be a parent by becoming one. Second, experiences such as becoming a parent are existentially transformative: you don’t know how such an experience will change you and your preferences. We suggest that serious illness is a transformative experience and that Paul’s framework usefully characterises central aspects of it.

PDF, Journal

The population ethics of belief: in search of an epistemic Theory X

Noûs doi: 10.1111/nous.12164

Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she’s also got credences in 999 million other propositions. Phoebe’s credences are all very accurate. Each of Daphne’s credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe’s or Daphne’s?

It is clear that this question is analogous to a question that has exercised ethicists over the past thirty years. How do we weigh a population consisting of some number of exceptionally happy and satisfied individuals against another population consisting of a much greater number of people whose lives are only just worth living? This is the question that occasions population ethics. In this paper, I go in search of the correct population ethics for credal states.

PDF, Journal

Jamesian epistemology formalised: an explication of ‘The Will to Believe’

Episteme 13(3):253-268

Famously, William James held that there are two commandments that govern our epistemic life: Believe truth! Shun error! In this paper, I give a formal account of James’ claim using the tools of epistemic utility theory. I begin by giving the account for categorical doxastic states – that is, full belief, full disbelief, and suspension of judgment. Then I will show how the account plays out for graded doxastic states – that is, credences. The latter part of the paper thus answers a question left open in (Pettigrew 2014).

PDF, Journal

Remaking the elite university: An experiment in widening participation in the UK (with Josie McLellan and Tom Sperlinger)

Power and Education 8(1):54-72

This article analyses and critiques the discourse around widening participation in elite universities in the UK. One response, from both university administrators and academics, has been to see this as an ‘intractable’ problem which can at best be ameliorated through outreach or marginal work in admissions policy. Another has been to reject the institution of the university completely, and seek to set up alternative models of autonomous higher education. The article presents a different analysis, in which the university is still seen as central and participation is seen as an aspect of pedagogy rather than as an administrative process. This is illustrated through a description of how a Foundation Year in Arts and Humanities was conceived, designed and implemented at the University of Bristol. This model is used to consider the problems, risks and successes in challenging received notions of how (and whether) widening participation can be achieved, and whether it can reach those who are currently most excluded from elite universities, such as those without qualifications. The article suggests how academics can utilise their expertise to solve key challenges faced by universities and reclaim autonomy in central aspects of university administration. At the same time, it demonstrates how change to the current model of student recruitment can also bring welcome – and transformative – change to the nature of elite higher education institutions in the UK and elsewhere.

PDF, Journal

Accuracy, Risk, and the Principle of Indifference

Philosophy and Phenomenological Research 92(1):35-59

In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? The Principle of Indifference gives a very restrictive answer. It demands that an agent with no evidence divide her credences equally over all possibilities. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, I offer a novel argument for the Principle of Indifference, which I call the Argument from Accuracy.
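
A toy version of the argument’s structure, under an assumption made explicit here: reading the relevant risk-sensitive decision rule as minimax applied to worst-case Brier inaccuracy. Over three exclusive and exhaustive possibilities, the uniform distribution then has the best worst case.

```python
def worst_case_brier(credences):
    worlds = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]    # three exclusive possibilities
    return max(sum((c - t) ** 2 for c, t in zip(credences, world))
               for world in worlds)

grid = [x / 30 for x in range(31)]
candidates = [(a, b, 1 - a - b) for a in grid for b in grid if a + b <= 1]
print(min(candidates, key=worst_case_brier))   # (1/3, 1/3, 1/3): the uniform distribution
```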

PDF, Journal

Review of John P. Burgess’ Rigor and Structure

Philosophia Mathematica 24(1):129-136

In this review, I focus on the possibility of giving a precise account of informal mathematical proof; and the lesson that Burgess draws from the indifference that mathematicians have towards questions about the subject matter of their discipline.

PDF, Journal

Review of L. A. Paul’s Transformative Experience

Mind 125(499):927-935

In this review, I focus mainly on Paul’s own solution to the problems that she raises for orthodox decision theory; and I consider the possibility of an alternative solution, which I originally proposed in ‘Transformative Experience and Decision Theory’ (Philosophy and Phenomenological Research, 2014).

PDF, Journal

2015

Risk, rationality, and expected utility theory

Canadian Journal of Philosophy 45(5-6): 798-826

There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk, and in particular on the alternative to the orthodoxy proposed by Lara Buchak, namely, risk-weighted expected utility theory. I will show that the orthodoxy can be made to accommodate all of the preferences that Buchak’s theory can accommodate.

PDF, Journal

Epistemic utility arguments for Probabilism (revised version)

in Zalta, E. (ed.) Stanford Encyclopedia of Philosophy

A survey article on epistemic utility arguments for Probabilism.

Website

Transformative experience and decision theory

Philosophy and Phenomenological Research 91(3):766-774. (Contribution to book symposium on L. A. Paul’s Transformative Experience)

I have never eaten Vegemite—should I try it? I currently have no children—should I apply to adopt a child? In each case, one might imagine, whichever choice I make, I can make it rationally by appealing to the principles of decision theory. Not always, says L. A. Paul. In Transformative Experience, Paul issues two challenges to decision theory based upon examples such as these. I will show how we might reformulate decision theory in the face of these challenges. Then I will consider the philosophical questions that remain after the challenges have been accommodated.

PDF, Journal

Pluralism about belief states

Proceedings of the Aristotelian Society (Supp. Vol.) 89(1):187-204 (Contribution to a symposium on Hannes Leitgeb’s Humean thesis on belief at the Joint Session of the Aristotelian Society and Mind Association 2015)

With his Humean thesis on belief, Leitgeb (2015) seeks to say how beliefs and credences ought to interact with one another. To argue for this thesis, he enumerates the roles beliefs must play and the properties they must have if they are to play them, together with norms that beliefs and credences intuitively must satisfy. He then argues that beliefs can play these roles and satisfy these norms if, and only if, they are related to credences in the way set out in the Humean thesis. I begin by raising questions about the roles that Leitgeb takes beliefs to play and the properties he thinks they must have if they are to play them successfully. After that, I question the assumption that, if there are categorical doxastic states at all, then there is just one kind of them—to wit, beliefs—such that the states of that kind must play all of these roles and conform to all of these norms. Instead, I will suggest, if there are categorical doxastic states, there may be many different kinds of such state such that, for each kind, the states of that type play some of the roles Leitgeb takes belief to play and each of which satisfies some of the norms he lists. As I will argue, the usual reasons for positing categorical doxastic states alongside credences all tell equally in favour of accepting a plurality of kinds of them. This is the thesis I dub pluralism about belief states.

PDF, Journal

Accuracy and the belief-credence connection

Philosophers’ Imprint 15(16):1-20

Probabilism is the thesis that an agent is rational only if her credences are probabilistic. This paper will be concerned with what we might call the Accuracy Dominance Argument for Probabilism (Rosenkrantz, 1981; Joyce, 1998, 2009). In this paper, I wish to identify and explore a lacuna in this argument that arises for those who take there to be (at least) two sorts of doxastic states: beliefs and credences.

PDF, Journal

What chance-credence norms should not be

Noûs 49(1):177-196

A chance-credence norm states how an agent’s credences in propositions concerning objective chances ought to relate to her credences in other propositions. The most famous such norm is the Principal Principle (PP), due to David Lewis. However, Lewis noticed that PP is too strong when combined with many accounts of chance that attempt to reduce chance facts to non-modal facts. Those who defend such accounts of chance have offered two alternative chance-credence norms: the first is Hall’s and Thau’s New Principle (NP); the second is Ismael’s General Recipe (IP). Thus, the question arises: Should we adopt NP or IP or both? In this paper, I argue that IP has unacceptable consequences when coupled with reductionism, so we must accept NP alone.

PDF, Journal

2014

Deference done right (with Mike Titelbaum)

Philosophers’ Imprint 14(35):1-19

There are many kinds of epistemic experts to which we might wish to defer in setting our credences. These include: highly rational agents, objective chances, our own future credences, our own current credences, and evidential (or logical) probabilities. But how, precisely, ought we defer to these experts? Exactly what constraint does a deference requirement place on an agent’s credences at a particular time?

In this paper we consider three possible answers, inspired by three different principles that have been proposed for deference to objective chances. We consider how these options fare when applied to the other kinds of epistemic experts mentioned above. Besides assuming a baseline probabilism about rational credences, we are particularly interested in the following two desiderata:

  • A deference principle should be consistent with both the agent’s and the experts’ updating by Conditionalization.
  • A deference principle should permit agents to have various kinds of doubts about what’s rationally required.

Of the three deference principles we consider, we argue that two of the options face insuperable difficulties meeting these desiderata. The third, on the other hand, fares well — at least when it is applied in a particular way.

PDF, Journal

Two types of abstraction for structuralism (with Øystein Linnebo)

Philosophical Quarterly 64(255):267-283

If numbers were identified with any of their standard set-theoretic realizations, then they would have various non-arithmetical properties that mathematicians are reluctant to ascribe to them. Dedekind and later structuralists conclude that we should refrain from ascribing to numbers such ‘foreign’ properties. We first rehearse why it is hard to provide an acceptable formulation of this conclusion. Then we investigate some forms of abstraction meant to purge mathematical objects of all ‘foreign’ properties. One form is inspired by Frege; the other by Dedekind. We argue that both face problems.

PDF, Journal

2013

Accuracy and Evidence

Dialectica 67(4):579-96

In ‘A Nonpragmatic Vindication of Probabilism’, Jim Joyce argues that our credences should obey the axioms of the probability calculus by showing that, if they don’t, there will be alternative credences that are guaranteed to be more accurate than ours. But it seems that accuracy is not the only goal of credences: there is also the goal of matching one’s credences to one’s evidence. I will consider four ways in which we might make this latter goal precise: on the first, the norms to which this goal gives rise act as ‘side constraints’ on our choice of credences; on the second, matching credences to evidence is a goal that is weighed against accuracy to give the overall cognitive value of credences; on the third, as on the second, proximity to the evidential goal and proximity to the goal of accuracy are both sources of value, but this time they are incomparable; on the fourth, the evidential goal is not an independent goal at all, but rather a byproduct of the goal of accuracy. All but the fourth way of making the evidential goal precise are pluralist about credal virtue: there is the virtue of being accurate and there is the virtue of matching the evidence and neither reduces to the other. The fourth way is monist about credal virtue: there is just the virtue of being accurate. The pluralist positions lead to problems for Joyce’s argument; the monist position avoids them. I endorse the latter.

PDF, Journal

A New Epistemic Utility Argument for the Principal Principle

Episteme 10(1):19-35

Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that paper assumed (roughly) that a credence in a proposition is better the closer it is to the objective chance of that proposition. In this paper, I present an epistemic utility argument for Probabilism and the Principal Principle that retains Joyce’s assumption rather than the alternative I endorsed in the earlier paper. I argue that this results in a superior argument for these norms.

PDF, Journal

Epistemic utility and norms for credence

Philosophy Compass 8(10):897-908

Beliefs come in different strengths. An agent’s credence in a proposition is a measure of the strength of her belief in that proposition. Various norms for credences have been proposed. Traditionally, philosophers have tried to argue for these norms by showing that any agent who violates them will be led by her credences to make bad decisions. In this article, we survey a new strategy for justifying these norms. The strategy begins by identifying an epistemic utility function and a decision-theoretic norm; we then show that the decision-theoretic norm applied to the epistemic utility function yields the norm for credences that we wish to justify. We survey results already obtained using this strategy, and we suggest directions for future research.

PDF, Journal

Introducing…Epistemic Utility Theory

The Reasoner 7(1):10-11

A very brief overview of accuracy-based arguments for credal principles.

PDF

Review of Mark Colyvan’s An Introduction to the Philosophy of Mathematics

Bulletin of Symbolic Logic 19(3): 396-397

PDF, Journal

2012

Accuracy, Chance, and the Principal Principle

Philosophical Review 121(2):241-275

In “A Nonpragmatic Vindication of Probabilism,” James M. Joyce attempts to “depragmatize” de Finetti’s prevision argument for the claim that our credences ought to satisfy the axioms of the probability calculus. This article adapts Joyce’s argument to give nonpragmatic vindications of David Lewis’s original Principal Principle as well as recent reformulations due to Ned Hall and Jenann Ismael. Joyce enumerates properties that a function must have if it is to measure the distance from a set of credences to a set of truth values; he shows that, on any such measure, and for any set of credences that violates the probability axioms, there is a set that satisfies those axioms that is closer to every possible set of truth values. This article replaces truth values with objective chances in this argument; it shows that for any set of credences that violates the probability axioms or the Principal Principle, there is a set that satisfies both that is closer to every possible set of objective chances, and similarly for Ned Hall’s New Principle and Jenann Ismael’s Generalized Principal Principle. Along the way, the article provides new arguments for some of Joyce’s central conditions on distance measures, and it answers two pressing objections to Joyce’s strategy.

PDF, Journal

Indispensability arguments and instrumental nominalism

Review of Symbolic Logic 5(4):687-709

In the philosophy of mathematics, indispensability arguments aim to show that we are justified in believing that mathematical objects exist on the grounds that we make indispensable reference to such objects in our best scientific theories (Quine, 1981a; Putnam, 1979a) and in our everyday reasoning (Ketland, 2005). I wish to defend a particular objection to such arguments called instrumental nominalism. Existing formulations of this objection are either insufficiently precise or themselves make reference to mathematical objects or possible worlds. I show how to formulate the position precisely without making any such reference. To do so, it is necessary to supplement the standard modal operators with two new operators that allow us to shift the locus of evaluation for a subformula. I motivate this move and give a semantics for the new operators.

PDF, Journal

Identity and Discernibility in Philosophy and Logic (with James Ladyman and Øystein Linnebo)

Review of Symbolic Logic 5(1):162-186

Questions about the relation between identity and discernibility are important both in philosophy and in model theory. We show how a philosophical question about identity and discernibility can be ‘factorized’ into a philosophical question about the adequacy of a formal language to the description of the world, and a mathematical question about discernibility in this language. We provide formal definitions of various notions of discernibility and offer a complete classification of their logical relations. Some new and surprising facts are proved; for instance, that weak discernibility corresponds to discernibility in a language with constants for every object, and that weak discernibility is the most discerning nontrivial discernibility relation.
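
To make weak discernibility concrete, here is a tiny finite-model illustration of my own, in the spirit of the paper’s definitions: two objects standing in an irreflexive symmetric relation are weakly discernible, even though an automorphism swaps them and so no formula in one free variable separates them.

```python
R = {('a', 'b'), ('b', 'a')}   # irreflexive, symmetric: e.g. "has opposite spin to"

def weakly_discernible(x, y):
    """Some formula phi(u, v) with phi(x, y) true but phi(x, x) false; here phi := R(u, v)."""
    return ((x, y) in R) and not ((x, x) in R)

print(weakly_discernible('a', 'b'))   # True: R discerns a from b weakly
print(weakly_discernible('a', 'a'))   # False: nothing weakly discerns a from itself
```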

PDF, Journal

2011

An Improper Introduction to Epistemic Utility Theory

in Regt, Henk de, Stephan Hartmann, and Samir Okasha (eds.) EPSA Philosophy of Science: Amsterdam 2009 (Springer)

A survey of accuracy-based arguments for Probabilism and Conditionalization.

PDF

Probability

in Horsten, L. and R. Pettigrew (eds.) Continuum Companion to Philosophical Logic (Continuum Press)

An introductory survey article on different interpretations of probability.

PDF

Category theory as an autonomous foundation (with Øystein Linnebo)

Philosophia Mathematica 19(3):227-254

Does category theory provide a foundation for mathematics that is autonomous with respect to the orthodox foundation in a set theory such as ZFC? We distinguish three types of autonomy: logical, conceptual, and justificatory. We argue that, while a strong case can be made for its logical and conceptual autonomy, its justificatory autonomy turns on whether or not mathematical theories can be justified by appeal to mathematical practice. If they can, a category-theoretical approach will be fully autonomous; if not, the most natural route to justificatory autonomy is blocked.

PDF, Journal

2010

An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy (with Hannes Leitgeb)

Philosophy of Science 77: 236-272 (Chosen for the Philosophers’ Annual 2010)

In this article and its prequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we make the norm mathematically precise; in this article, we derive its consequences. We show that the two core tenets of Bayesianism follow from Accuracy, while the characteristic claim of Objective Bayesianism follows from Accuracy together with an extra assumption. Finally, we show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed, and we describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.
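
For readers who want the two principles mentioned at the end, here are their standard formulations (not drawn from the paper itself), with p the prior, q the posterior, and {E_i} the partition over which new credences are set:

    \[
    \text{Jeffrey Conditionalization:}\quad q(A) \;=\; \sum_{i} p(A \mid E_{i})\, q(E_{i})
    \]
    \[
    \text{Rigidity:}\quad q(A \mid E_{i}) \;=\; p(A \mid E_{i}) \quad \text{for each } i
    \]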

PDF, Journal

An Objective Justification of Bayesianism I: Measuring Inaccuracy (with Hannes Leitgeb)

Philosophy of Science 77: 201-235

In this article and its sequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In this article, we make this norm mathematically precise. We describe epistemic dilemmas an agent might face if she attempts to follow Accuracy and show that the only measures of inaccuracy that do not create these dilemmas are the quadratic inaccuracy measures. In the sequel, we derive Bayesianism from Accuracy and show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed. We describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.
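
The quadratic inaccuracy measures singled out here generalize the Brier score, whose simplest form is worth recording (a standard formulation, included for orientation):

    \[
    I(b, w) \;=\; \sum_{A} \big( v_{w}(A) - b(A) \big)^{2}
    \]

where v_w(A) is the truth value (1 or 0) of proposition A at world w and b(A) is the agent’s credence in A.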

PDF, Journal

Modelling Uncertainty: Review essay on Huber, F. and C. Schmidt-Petri (eds.) Degrees of Belief

Grazer Philosophische Studien 80: 309-316

The book under review provides a stimulating, informative, and focussed collection of new articles that survey a topic in formal epistemology that is fast becoming one of the central topics in mainstream epistemology. The twelve articles, as well as Huber’s excellent Introduction, address the following two questions:

  1. How should we model or represent an agent’s epistemic state?
  2. What constraints does rationality impose on an agent’s epistemic states thus modelled?

I treat each of these questions in turn, and conclude with a detailed consideration of an argument from Joyce’s article.

PDF, Journal

The foundations of arithmetic in finite bounded Zermelo set theory

in Hinnion, R. and T. Libert (eds.) One Hundred Years of Axiomatic Set Theory, Cahiers du Centre de Logique 17: 99-118

In this paper, I pursue a logical foundation for arithmetic in a variant of Zermelo set theory that has axioms of subset separation only for quantifier-free formulae, and according to which all sets are Dedekind finite. In section 2, I describe this variant theory. And in section 3, I sketch foundations for arithmetic in that theory and prove that certain foundational propositions that are theorems of the standard Zermelian foundation for arithmetic are independent of it. An equivalent theory of sets and an equivalent foundation for arithmetic were introduced by Mayberry and developed by the current author in his doctoral thesis. In that thesis and in the joint paper with Mayberry to which it gave rise, the independence results mentioned above are proved using proof-theoretic methods. In this paper, I offer model-theoretic proofs of the central independence results using the technique of cumulation models, which was introduced by Steve Popham, a doctoral student of Mayberry’s in the early 1980s.
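
The two distinctive features of the variant theory admit precise statements (standard definitions, recorded here for orientation):

    \[
    \text{Separation (quantifier-free):}\quad \forall a\, \exists b\, \forall x\, \big( x \in b \leftrightarrow ( x \in a \wedge \varphi(x) ) \big), \quad \text{for quantifier-free } \varphi
    \]

A set is Dedekind finite just in case every injection from it into itself is a surjection; equivalently, it admits no bijection with a proper subset of itself.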

PDF

2009

On interpretations of bounded arithmetic and bounded set theory

Notre Dame Journal of Formal Logic 50(2): 141-152

In ‘On interpretations of arithmetic and set theory’, Kaye and Wong proved the following result, which they considered to belong to the folklore of mathematical logic.

Theorem. The first-order theories of Peano arithmetic and Zermelo-Fraenkel set theory with the axiom of infinity negated are bi-interpretable.

In this note, I describe a theory of sets that is bi-interpretable with the theory of bounded arithmetic, IΔ0 + exp. Because of the weakness of this theory of sets, I cannot straightforwardly adapt Kaye and Wong’s interpretation of the arithmetic in the set theory. Instead, I am forced to produce a different interpretation.
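
Kaye and Wong’s bi-interpretation rests on Ackermann’s coding, under which n is a ‘member’ of m just in case the nth bit of m’s binary expansion is 1. Here is a minimal sketch of that membership relation (illustrative only; the adapted interpretation for the bounded setting described in the paper is necessarily different):

    def ackermann_member(n: int, m: int) -> bool:
        """Ackermann coding: n 'is a member of' m iff bit n of m's binary expansion is 1."""
        return (m >> n) & 1 == 1

    # Example: m = 11 = 0b1011 codes the set {0, 1, 3}.
    assert ackermann_member(0, 11) and ackermann_member(1, 11) and ackermann_member(3, 11)
    assert not ackermann_member(2, 11)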

PDF, Journal

Aristotle on the subject matter of geometry

Phronesis 54: 239-260

I offer a new interpretation of Aristotle’s philosophy of geometry, which he presents in greatest detail in Metaphysics M 3. On my interpretation, Aristotle holds that the points, lines, planes, and solids of geometry belong to the sensible realm, but not in a straightforward way. Rather, by considering Aristotle’s second attempt to solve Zeno’s Runner Paradox in Book VIII of the Physics, I explain how such objects exist in the sensibles in a special way. I conclude by considering the passages that lead Jonathan Lear to his fictionalist reading of Met. M 3, and I argue that Aristotle is here describing useful heuristics for the teaching of geometry; he is not pronouncing on the meaning of mathematical talk.

PDF, Journal

2008

Platonism and Aristotelianism in Mathematics

Philosophia Mathematica 16(3): 310-332

Philosophers of mathematics agree that the only interpretation of arithmetic that takes that discourse at ‘face value’ is one on which the expressions ‘N’, ‘0’, ‘1’, ‘+’, and ‘×’ are treated as proper names. I argue that the interpretation on which these expressions are treated as akin to free variables has an equal claim to be the default interpretation of arithmetic. I show that no purely syntactic test can distinguish proper names from free variables, and I observe that any semantic test that can do so must beg the question. I draw the same conclusion concerning areas of mathematics beyond arithmetic.

PDF, Journal

Drafts that will probably remain drafts

A pragmatic characterisation of linear pooling

How should we determine a group’s collective probabilistic judgments, given the probabilistic judgments of the individuals in the group? A standard answer is given by this condition: The group probability distribution over the propositions should be a weighted average of the individual probability distributions. Call this Linear Pooling. We provide a condition on aggregates that characterises linear pooling: Given a utility function shared by all members of the group, if each individual in the group expects one act to have greater utility than another, then the group expects the first act to have greater utility than the second. Call this Pareto. We prove that Linear Pooling and Pareto are equivalent.
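
Restating the abstract’s two conditions symbolically, for individual distributions P_1, …, P_n, group distribution P_G, and shared utility function U:

    \[
    \text{Linear Pooling:}\quad P_{G} \;=\; \sum_{i} \lambda_{i} P_{i}, \quad \text{with } \lambda_{i} \geq 0 \text{ and } \textstyle\sum_{i} \lambda_{i} = 1
    \]
    \[
    \text{Pareto:}\quad \text{if } \mathrm{Exp}_{P_{i}}[U(a)] > \mathrm{Exp}_{P_{i}}[U(b)] \text{ for every } i, \text{ then } \mathrm{Exp}_{P_{G}}[U(a)] > \mathrm{Exp}_{P_{G}}[U(b)]
    \]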

PDF

Accuracy-domination arguments and credences as estimates of truth-values

Branden Fitelson has recently raised an intriguing objection (Fitelson, 2012) to Jim Joyce’s accuracy-domination arguments for probabilism (Joyce, 1998, 2009). He adapts an objection raised by David Miller against accounts of verisimilitude that make it a measure of the accuracy of a theory’s predictions (Miller, 1975). As Joyce presents his accuracy-domination argument, it is based on a conception of credences as estimates of truth-values; and Fitelson’s objection is based on an alleged analogy between an agent’s estimate of a quantity (such as a truth-value) and a scientific theory’s prediction of the value of a quantity. I will offer two responses to Fitelson’s objection: I will show that, even if the alleged analogy does hold, it does not undermine Joyce’s argument; then I will argue that the analogy does not hold.

PDF

Self-locating beliefs and the goal of accuracy

The goal of a partial belief is to be accurate, or close to the truth. By appealing to this norm, I seek norms for partial beliefs in self-locating and non-self-locating propositions. My aim is to find norms that are analogous to the Bayesian norms, which, I argue, only apply unproblematically to partial beliefs in non-self-locating propositions. I argue that the goal of a set of partial beliefs is to minimize the expected inaccuracy of those beliefs. However, in the self-locating framework, there are two equally legitimate definitions of expected inaccuracy. And, while each gives rise to the same synchronic norm for partial beliefs, they give rise to different, inconsistent diachronic norms. I conclude that both norms are rationally permissible. En passant, I note that this entails that both Halfer and Thirder solutions to the well-known Sleeping Beauty puzzle are rationally permissible.
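
The generic schema for expected inaccuracy, which the paper makes precise in two different ways for the self-locating case, is (a standard formulation, included for orientation):

    \[
    \mathrm{Exp}_{b}\big[ I(b') \big] \;=\; \sum_{w} b(w)\, I(b', w)
    \]

where the inaccuracy of the beliefs b' at each world w is weighted by the credence b assigns to that world.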

PDF