The Foundations of Longtermism

This is a research project funded by the Leverhulme Trust, hosted in the Department of Philosophy at the University of Bristol, and led by Richard Pettigrew.

It will run from 1st September 2024 until 31st August 2027.

The research team includes Richard Pettigrew (project lead) and a postdoctoral researcher (to be recruited in April 2024). The job advert is here.

Image: Ejiri in Suruga Province, by Katsushika Hokusai

Summary of the project

Longtermism is a radical view about what morality requires. Suppose you can either (i) reduce the probability of human extinction this century by one in a billion, or (ii) prevent the deaths of 1,000 people for sure. Longtermism argues that you should choose (i), because it produces the greater goodness in expectation. In this project, we challenge this argument by asking how our attitudes to risk, and the attitudes of those our choice affects, should figure in moral decision-making. Here, an individual’s attitudes to risk determine how much weight they give to worse outcomes and how much to better ones.
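To make the expected-value comparison concrete, here is a minimal sketch of the calculation, assuming, purely for illustration, that 10^16 lives are at stake if humanity goes extinct; the figure is an assumption of the sketch, not a number the project endorses.

```python
# A minimal sketch of the expected-value comparison behind the longtermist
# argument. The figure of 10**16 lives at stake in an extinction is an
# illustrative assumption only.

FUTURE_LIVES_AT_STAKE = 10 ** 16

# (i) reduce the probability of extinction this century by one in a billion
option_i_expected_goodness = 1e-9 * FUTURE_LIVES_AT_STAKE   # 10,000,000 lives in expectation

# (ii) prevent the deaths of 1,000 people for sure
option_ii_expected_goodness = 1.0 * 1_000                   # 1,000 lives

print(option_i_expected_goodness > option_ii_expected_goodness)  # True
```

On these numbers, option (i) delivers ten million lives in expectation, dwarfing the thousand saved for certain; it is exactly this style of comparison that the project examines.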

Context for the project

Longtermism is a radical view about what morality requires of us. It has recently become popular in the effective altruism and global priorities movements, which investigate the ways in which we should direct our philanthropic giving as well as our large-scale policy initiatives. Prominent charity evaluators used to award the highest places in their rankings to charities that fund health interventions, such as anti-malarial bed nets. Recently, they have begun to recommend initiatives that are focussed on preventing human extinction in the short- to medium-term future: initiatives that seek to reduce the threat of nuclear war by shoring up democratic institutions and institutions of international diplomacy, for instance.

Most of the arguments that support this new focus rely on a particular approach to moral decision-making. First, we say how we’re going to measure the goodness of each possible way the future might turn out. Second, we say that morality requires us to choose whichever intervention maximises goodness in expectation, that is, the probability-weighted average of the goodness of its possible outcomes.

But why does morality require you to maximise goodness in expectation? Why does it not permit other ways of making decisions on the basis of the probabilities you assign to the different possible futures and the goodness of those futures? Since the 1950s, we have known of many such alternative decision theories. They come from rational choice theory, where we study not moral decision-making but individual prudential decision-making. In this project, we investigate the foundations of these alternative decision theories, in particular those that permit different attitudes to risk; we ask which of them should be used for moral decision-making; and we ask what effect this has on the arguments for longtermism.