
Week 3: Moral Uncertainty & Maximisation

How can we make our actions robust to moral uncertainty?

Overview

In the previous week, we focused on improving the accuracy of our judgement in the face of empirical uncertainty. This week, we’ll be thinking about how we can make decisions given our uncertainty about what we should assign moral value to and what kinds of ethical systems we should use.

 

Arguments put forward in the EA community tend to use a consequentialist framework, whereby actions are evaluated according to their outcomes. However, since there is no consensus among philosophers on which ethical theory is “correct”, we shouldn’t be overconfident or exclusive in applying a consequentialist or any other approach. 

 

Moreover, there is also a lack of consensus about how we should value different things. People disagree about how to value different beings compared to each other and will give different answers to the question, ‘Would you save 1 human or 10,000 pigs?’. People also disagree about how to value different outcomes relative to each other, for example, how to trade off between improving the quality of lives and saving lives. 

 

This lack of consensus suggests that we are making decisions under “moral uncertainty”. This week, we will be looking at how we can make judgements whilst taking this into account. We will also question whether the core EA principle of aiming to maximise wellbeing is broadly supported across different ethical frameworks.

Goals for this week

  • Recognise that moral intuitions can conflict, and practise resolving these conflicts

  • Consider whether the principle of maximisation is broadly supported across ethical frameworks

 

Core Reading

 

Preliminary

[20m] Normative Ethics (Webpage) - If you’re not already familiar with normative ethical theories, it will probably be helpful to read the whole section on Normative Ethics, from Virtue Theories up to and including Ethical Egoism and Social Contract Theory.

Moral uncertainty

[5m] Moral uncertainty

[5m] Practical ethics given moral uncertainty -  An EA forum post from Will MacAskill on the role of moral uncertainty in practical decisions and how this may be analogous to empirical uncertainty

Maximising wellbeing as part of Effective Altruism

Read at least one of:

[12m] Defining Effective Altruism - A proposed definition of effective altruism from Will MacAskill, which names maximisation, and tentatively impartialism and welfarism, as key components (EA Forum Post - 12 mins.)

[45m] The definition of effective altruism - a chapter from Effective Altruism: Philosophical Issues.

[10m] Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

[30m] Rational Numbers: A Non-Consequentialist Explanation Of Why You Should Save The Many And Not The Few

Exercise: Grappling with ethical uncertainties

Identify the ethical questions

Think about your cruxes from the first week of the fellowship (or think of new ethical cruxes or uncertainties). Feel free to make use of cruxes other fellows raised in session one if you find those more compelling. Last week we focused on empirical uncertainties, i.e. uncertainties about how the world works. Now we’ll focus on uncertainties that are ethical in nature. 

 

These uncertainties might take one of two forms: 

 

  1. Uncertainties about what we value (axiological uncertainty)

    • E.g. “Do I value this group?”, “How do I weigh up the needs of X against the needs of Y?”

  2. Uncertainties about how we should act (normative uncertainty) 

    • E.g. “Is it wrong to eat fish?” 

 

Look at your ethical cruxes or uncertainties from the first session and try to operationalise each of them into questions (5-10 mins). These can be questions either about what we value, or about how we should act. Try to come up with some in the latter category, as we will use these in the next exercise. 

 

Examples with uncertainties about what we value:

 

Uncertainty: If I believed people who suffer in the future are worth as much as people suffering now, I would care more about the long-term trajectory of humanity.

Question: How much do I discount happiness in the future compared to happiness now?

Uncertainty: If someone convinced me animal suffering was morally relevant I’d change my mind about factory farming.

Question: How sure am I that animal suffering is morally neutral?

Or: Are animals moral patients?

 

Examples with uncertainties about how we should act:

 

Uncertainty: I’m uncertain whether donating to global health charities is the most effective use of my resources from a longtermist perspective (which I largely hold). 

Question: Should I donate to global health charities?

 

Uncertainty: I’m uncertain if it’s morally permissible to continue eating meat if I offset it by donating to animal welfare charities.

Question: Is it morally permissible to eat meat if you offset the harm by donating to animal welfare charities? 

Addressing the ethical questions given moral uncertainty

The problem of making decisions in the face of moral uncertainty is the subject of ongoing academic debate, which you can explore in the Further Reading if you choose to. In the second part of this week’s exercise, we’re going to look at two of the proposed models for making decisions under moral uncertainty. The models have been greatly simplified for the purposes of the exercise, but they should give you a taste of how these problems can be approached. 

We will now see how different viewpoints would rank the possible actions according to their “choiceworthiness”, i.e. how appealing they seem as actions from that viewpoint. 

Here is a worked example for the same question: Is it morally permissible to eat meat if you offset the harm by donating to animal welfare charities?

 

  1. Think about the different actions you could take in response to this question.

    1. Either continue to eat meat and donate to animal welfare charities, or just donate to animal welfare charities. 

    2. Stop eating meat

    3. Stop eating meat and donate the money you would have spent on it to animal welfare charities 

    4. Eat meat and either do or don’t donate to animal welfare charities 

  2. How do different viewpoints rank these actions?

    1. Viewpoint 1: Animal suffering matters, but the suffering experienced by the animals I eat is outweighed by the expected reduction in their suffering caused by my donations.

      1. Ranking

        1. Either continue to eat meat and donate to animal welfare charities, or just donate to animal welfare charities. 

        2. Stop eating meat and donate the money you would have spent on it to animal welfare charities 

        3. Stop eating meat

        4. Eat meat and either do or don’t donate to animal welfare charities 

    2. Viewpoint 2: It is a moral obligation to avoid the unnecessary suffering of another sentient being.

      1. Ranking

        1. Stop eating meat OR Stop eating meat and donate the money you would have spent on it to animal welfare charities 

        2. Either continue to eat meat and donate to animal welfare charities, or just donate to animal welfare charities OR Eat meat and either do or don’t donate to animal welfare charities 

    3. Viewpoint 3: I have an obligation to increase the wellbeing of animals.

      1. Ranking

        1. Stop eating meat and donate the money you would have spent on it to animal welfare charities 

        2. Either continue to eat meat and donate to animal welfare charities, or just donate to animal welfare charities. 

        3. Stop eating meat

        4. Eat meat and either do or don’t donate to animal welfare charities 

    4. Viewpoint 4: The benefit to me from eating meat outweighs the cost to animals. 

      1. Ranking

        1. Eat meat and either do or don’t donate to animal welfare charities 

        2. Either continue to eat meat and donate to animal welfare charities, or just donate to animal welfare charities OR Stop eating meat and donate the money you would have spent on it to animal welfare charities OR Stop eating meat 

  3. Which action seems the most choiceworthy across the rankings?

    1. Stop eating meat and donate the money you would have spent on it to animal welfare charities

Work through this process using one of your uncertainties about how to act (10-15 mins.)
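The exercise above leaves the aggregation rule implicit. One simple, illustrative option (not the only defensible one) is a Borda-style count, where each action earns points for its position in each viewpoint's ranking and tied actions share the averaged points. The short action labels below are hypothetical shorthand for the four actions in the worked example.

```python
# A sketch of one possible aggregation rule (Borda count with averaged ties).
# Action labels are shorthand for the worked example's four actions.

ACTIONS = ["eat+donate", "stop", "stop+donate", "eat"]

# Each viewpoint's ranking, best first; inner lists are ties.
rankings = {
    "V1": [["eat+donate"], ["stop+donate"], ["stop"], ["eat"]],
    "V2": [["stop", "stop+donate"], ["eat+donate", "eat"]],
    "V3": [["stop+donate"], ["eat+donate"], ["stop"], ["eat"]],
    "V4": [["eat"], ["eat+donate", "stop+donate", "stop"]],
}

def borda(ranking, n_actions):
    """Best position scores n-1 points, worst scores 0; ties share the average."""
    scores = {}
    position = 0
    for tier in ranking:
        # Points for the positions this tier occupies.
        points = [n_actions - 1 - (position + i) for i in range(len(tier))]
        avg = sum(points) / len(points)
        for action in tier:
            scores[action] = avg
        position += len(tier)
    return scores

totals = {a: 0.0 for a in ACTIONS}
for ranking in rankings.values():
    for action, pts in borda(ranking, len(ACTIONS)).items():
        totals[action] += pts

winner = max(totals, key=totals.get)
print(totals)
print("Most choiceworthy:", winner)  # "stop+donate" under this rule
```

Under this rule, "stop eating meat and donate the money you would have spent" comes out on top, matching the worked example; other aggregation rules (e.g. only counting first places) could rank differently.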

[Optional] Analyse the same decision using the parliamentary model

Consider one of your ethical questions about how we should act. Now we’re going to try using the parliamentary model to examine how you might answer this question from different viewpoints or ethical frameworks, and to come to a decision about how to act. 

 

Here is a worked example for the question: Is it morally permissible to eat meat if you offset the harm by donating to animal welfare charities?

 

  1. Think of a number of different viewpoints one could have in response to this question. 

    1. Viewpoint 1: Animal suffering matters, but the suffering experienced by the animals I eat is outweighed by the expected reduction in their suffering caused by my donations.

    2. Viewpoint 2: It is a moral obligation to avoid the unnecessary suffering of another sentient being.

    3. Viewpoint 3: I have an obligation to increase the wellbeing of animals. 

    4. Viewpoint 4: The benefit to me from eating meat outweighs the cost to animals. 

  2. Roughly estimate the extent to which you are convinced by each of the viewpoints. This will determine how many delegates for that viewpoint you send to the moral parliament.  

    1. Viewpoint 1: 60% convinced - 60/100 delegates 

    2. Viewpoint 2: 10% convinced - 10/100 delegates

    3. Viewpoint 3: 5% convinced - 5/100 delegates 

    4. Viewpoint 4: 25% convinced - 25/100 delegates 

  3. Two important things to know:

    1. In the parliament, the delegates believe that the probability of the parliament deciding to take action A is proportional to the fraction of votes for A. However, unbeknownst to the delegates, the parliament always takes whichever action gets the most votes. 

    2. The parliament will make some concessions to delegates with a particular viewpoint if the issue being debated is especially important for that viewpoint.

  4. Imagine your delegates are debating the question. What do the different viewpoints vote that you do? Is the issue particularly important to any of the viewpoints, such that their opinions on this issue should be given extra weight?

    1. Viewpoint 1: Either continue to eat meat and donate to animal welfare charities, or just donate to animal welfare charities. (60 votes)

    2. Viewpoint 2: Stop eating meat (10 votes & the issue is especially important)

    3. Viewpoint 3: Stop eating meat and donate the money you would have spent on it to animal welfare charities (5 votes & the issue is especially important)

    4. Viewpoint 4: Eat meat and either do or don’t donate to animal welfare charities (25 votes)

  5. What do you ultimately decide to do?

    1. Stop eating meat and donate the money you would have spent on it to animal welfare charities (because Viewpoints 2 and 3 voted for this action on an issue which is especially important to them, and Viewpoint 1 also accepts this action)
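The parliamentary model deliberately leaves "concessions" informal. One toy way to make the worked example's arithmetic concrete is to assume, hypothetically, that each viewpoint also "accepts" some actions beyond its first choice, and that delegates for whom the issue is especially important get extra weight. Both the acceptability sets and the multiplier of 3 below are illustrative assumptions, not part of the model.

```python
# A toy tally of the moral parliament. The acceptability sets and the
# IMPORTANCE_BOOST multiplier are hypothetical illustrations of the informal
# "concessions" idea, not part of the model itself.

IMPORTANCE_BOOST = 3  # arbitrary extra weight for high-stakes viewpoints

# (delegates, actions this viewpoint finds acceptable, especially important?)
viewpoints = {
    "V1": (60, {"eat meat + donate", "stop eating meat + donate"}, False),
    "V2": (10, {"stop eating meat", "stop eating meat + donate"}, True),
    "V3": (5,  {"stop eating meat + donate"}, True),
    "V4": (25, {"eat meat"}, False),
}

tally = {}
for delegates, acceptable, important in viewpoints.values():
    weight = delegates * (IMPORTANCE_BOOST if important else 1)
    for action in acceptable:
        tally[action] = tally.get(action, 0) + weight

decision = max(tally, key=tally.get)
print(tally)
print("Decision:", decision)  # "stop eating meat + donate"
```

Under these assumptions, "stop eating meat and donate" wins: it is the one action that the boosted Viewpoints 2 and 3 vote for and that the large Viewpoint 1 bloc also accepts, mirroring the reasoning in step 5.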

 

Further Reading

  • Stop the robot apocalypse - A review of William MacAskill’s Doing Good Better which explores a number of criticisms of EA, including criticisms of demandingness and impartiality (45 mins.)

  • Review and Summary of Moral Uncertainty A review and summary of the recently published 'Moral Uncertainty' by Will MacAskill, Krister Bykvist, and Toby Ord. (90 mins.)

  • Normative Uncertainty - A PhD thesis by Will MacAskill on his model of moral uncertainty and a ‘metanormative’ position: norms for acting under normative uncertainty. He proposes a model of maximising expected choiceworthiness in morally uncertain situations. (PhD Thesis)

  • Embracing the intellectual challenge of Effective Altruism “While it's easy to view the intellectual challenge of effective altruism as a liability, it is better to view it as an asset. In this talk from EA Global 2016, Michael Page lays out why effective altruism is hard, and how we can accept and appreciate that fact.” (EAG Talk - 20 min.)

  • Moral trade and effective altruism “A moral trade occurs when individuals with different values cooperate to produce an outcome that's better according to both their values than what they could have achieved individually.” (EAG Talk - 13 min.)

  • Moral Trade “If people have different resources, tastes, or needs, they may be able to exchange goods or services such that they each feel they have been made better off. This is trade. If people have different moral views, then there is another type of trade that is possible: they can exchange goods or services such that both parties feel that the world is a better place or that their moral obligations are better satisfied.” (Paper - 45 min.)

  • A bargaining-theoretic approach to moral uncertainty - “This paper explores a new approach to the problem of decision under relevant moral uncertainty.” (Paper - 30 mins.)

  • Fundamental value differences are not that fundamental (20 mins.)

  • The whole city is centre - “[S]ome of the simplest fake value differences are where people make a big deal about routing around a certain word. And some of the most complicated real value differences are where some people follow a strategy explicitly and other people follow heuristics that approximate that strategy.”(45 mins.)

  • Normative uncertainty as a voting problem - An argument that the problems of maximising expected choiceworthiness of actions can be overcome (Paper - 90 mins.)

  • Geometric reasons for normalising variance to aggregate preferences (Paper - 60 mins.)

  • Gains from trade through compromise - Making the case for why those who prioritise reducing suffering could benefit from trade and proposing ideas for how to encourage compromise among nations, ideologies, and individuals in the future (60 mins.)

  • Effective altruism and free riding - Arguing that the standard cause-prioritization methodology used within EA recommends to defect (“free-ride”) in prisoner’s dilemma settings (60 mins.)

  • Philosophical Critiques of Effective Altruism (Paper - 25 mins.)

  • Maximising expected value under axiological uncertainty - (PhD Thesis)

  • Value traps, and how to avoid them - (Talk - 30 mins.)

  • Fairness Presenting a theory about fairness as it applies to the distribution of goods between people (Paper - 30 mins.)

  • Rejecting Ethical Deflationism Arguing that we have strong reasons to reject theories which suggest no action or ethical theory is better than any other, even if we do not have reason to disbelieve them (Paper - 2 hrs.)

 