
Week 2: Rationality and Good Judgement

How can we improve our decision-making?


Making big decisions about our career choices, cause prioritisation, donations or political actions is really hard. Deciding the global allocation of resources, and how to cooperate across countries on governing emerging technologies and ending poverty is even harder. These difficulties are compounded by systematic errors in the way we think about evidence, the consequences of our actions, and the likelihood of future events. In these situations, we face a lot of empirical uncertainty, where we don’t have a perfect model for how the world is and what the state of the world will become.


These difficulties don’t have to be terminal or lead to decision-paralysis. One approach in such situations is to develop a toolkit of techniques and heuristics to improve our judgement and understanding of the world, helping us decide how to act. This session explores a range of articles, blogs and videos that seem useful for developing these tools, hopefully leading to better decisions that improve the lives of others. Much of this literature comes from the fields of psychology and mathematics, and is discussed frequently within the rationality community.


While institutional decision-making is a cause area that many individuals in the EA community are interested in, this is outside the scope of this fellowship. We will be focusing on individual decision-making in this session.

Goals for this week

  • Start to explore the literature on how to have good judgement

  • Evaluate how useful these tools and heuristics are for making better decisions that affect the lives of others, including considerations for our ability to develop accurate models of the world and how future events may play out

  • Test these heuristics on our own empirical beliefs or uncertainties


Core Reading


[20m, video] Rationality and Effective Altruism

[5m] What Do We Mean By "Rationality"?

Evidence and Arguments

[11m, video] A visual guide to Bayesian thinking

[6m] Deference for Bayesians

[10m] Making Beliefs Pay Rent (in Anticipated Experiences)


[10m] Efforts to Improve the Accuracy of Our Judgments and Forecasts

[5m] Beware the Inside View

[5m] Reference class forecasting


Exercise: Grappling with uncertainty

The aim of this exercise is to put probabilistic judgement into practice. We’ll think back to the key questions in our cause prioritisation and reframe them as forecasting or empirical questions, then try to answer them and see if we can get a useful forecast or estimate for that problem.


You’ll look back at your cruxes or uncertainties from the last session, and try to operationalise them into empirical questions that you might be able to make progress on. [Operationalisation is the process of strictly defining variables into measurable factors: it takes fuzzy concepts and makes them empirically and quantitatively measurable.] For example:

Uncertainty: I don’t know what the long term effects of global health interventions are. If they are bad, I’d be less likely to work on them.

Question: What is the probability that increasing economic growth increases existential risk? Or, how do problems like the meat eater problem affect the cost effectiveness of global health interventions quantitatively?


Uncertainty: I don’t know if animals feel pain. If they did, and it was comparable to human pain, I’d prioritise animal welfare more than I do now.

Question: What’s the probability that animals feel pain?

These operationalisations don’t need to be forecasts (about the future); they just need to be empirical (able to be confirmed or disconfirmed by evidence). Some of your cause prioritisation uncertainties might be ethical. Don’t worry about these yet, as they’re probably not suited to these techniques. We’ll come back to resolving ethical uncertainties next week.


Many of your cause prioritisation uncertainties will be very hard to resolve, or even to make a probabilistic judgement about. That’s completely fine: we don’t expect you to come up with a rigorous answer to any of these questions. The purpose of this exercise is to get practice making estimates and predictions about important questions. Often, trying to make a prediction can itself be a helpful way to understand a problem, so even if you don’t resolve an uncertainty, you might make progress on it. A good rule of thumb: try to answer a question, and if you find you can’t, lower your standards and answer in a vaguer, less reliable way; keep going until you have some answer to work with!

[5-10m] Uncertainties into questions

Look at your cruxes or uncertainties from last session and try to operationalise each of them into empirical questions. If the uncertainties you listed for last week’s exercise don’t seem suitable for this exercise, try to come up with new uncertainties or list other empirical beliefs you have that you’d like to consider more probabilistically. You can skip the ethical uncertainties for this week.

[30-40m] Try to come up with an estimate for your questions

  • Can you break the question down into sub-questions? 

    • This is inspired by Fermi estimating, which involves trying to find quantitative estimates of complex questions. 

    • Breaking complex questions into more manageable sub-questions makes it easier to make progress on difficult questions; however, it also gives you more estimates that you need to evaluate.

  • What do the relevant experts think?

    • You might want to search on the EA forum, look at the work of OpenPhil, FHI, or do a quick search in academic papers. If there aren’t any experts on the topic, can you see what experts in adjacent fields think?

    • Can you trust expert intuition in this domain? In what ways could experts be systematically wrong here?

  • Is there a similar trend that you can extrapolate?

    • Many of the things we’re trying to forecast are unprecedented events, so finding similar trends can be very difficult. In this case, are there any trends that are less similar, but might still be somewhat informative?

    • Is there a relevant reference class for this type of question that you can use to begin to develop an outside view?

    • E.g. if you’re predicting how long it would take you to write an essay, you may not be able to recall other times you’ve written an essay on that exact topic, but you can think of essays you’ve written on similar topics.

  • How is this case different to the similar trends? What might the effect of these differences be?

    • Here you can use more of an inside view and exercise your own judgment on the specifics of your question. 
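The decomposition step above can be sketched in code. Below is a minimal Fermi-estimate sketch: break a question into sub-questions, give each factor a rough (low, high) range, and combine them by sampling. The example question and every number in it are made-up placeholders for illustration, not real estimates.

```python
# A minimal Fermi-estimate sketch: break a question into sub-questions,
# estimate each factor as a (low, high) range, and combine them by sampling.
# The decomposition and all numbers below are hypothetical placeholders.
import math
import random

random.seed(0)

def sample_log_uniform(low, high):
    """Draw a sample between low and high, uniform in log space,
    which suits quantities uncertain over orders of magnitude."""
    return math.exp(random.uniform(math.log(low), math.log(high)))

# Hypothetical decomposition: "How many people does intervention X reach per year?"
# = (population) x (fraction eligible) x (fraction actually reached)
factors = {
    "population": (1e6, 5e6),
    "fraction_eligible": (0.05, 0.2),
    "fraction_reached": (0.1, 0.5),
}

samples = []
for _ in range(10_000):
    estimate = 1.0
    for low, high in factors.values():
        estimate *= sample_log_uniform(low, high)
    samples.append(estimate)

samples.sort()
median = samples[len(samples) // 2]
p10, p90 = samples[len(samples) // 10], samples[9 * len(samples) // 10]
print(f"median ~ {median:,.0f}, 10th-90th percentile ~ {p10:,.0f} to {p90:,.0f}")
```

Reporting a range rather than a single number keeps the uncertainty in each sub-question visible, which matters when, as the readings note, the final estimate can be dominated by that uncertainty.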

[20-30m] Other ways to approach uncertainties


  • Can you use cluster thinking to inform your estimate? 

    • Are there different perspectives or approaches that would be valuable to consider?

    • This is somewhat in contrast to the approach used in “Fermi Estimating” which is more similar to “sequence thinking.” 

  • Try to think of many weak arguments for and against the premise of your question(s) from the previous step. 

    • E.g. if your question is “what is the probability that animals feel pain?,” then try to list as many arguments as you can in favour of the claim that animals do feel pain, and as many arguments as you can against this claim. 

    • How independent of each other are these arguments? 

    • In what way does this update your answer? 

  • If you strongly believe that something is currently the case, or will happen in the future, what anticipated experiences arise from this belief? 

    • What do you expect to happen in the future based on this belief, that wouldn’t happen if this was not true? 
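One way to think about combining many weak arguments is as a series of Bayesian odds updates: each argument multiplies your prior odds by a likelihood ratio. The sketch below is illustrative only; the prior and the likelihood ratios are invented placeholders, and the independence assumption it relies on is exactly what the "how independent are these arguments?" prompt above asks you to question.

```python
# Sketch of combining several weak, roughly independent arguments
# via Bayesian odds updates. All numbers are illustrative placeholders.

def update_odds(prior_prob, likelihood_ratios):
    """Multiply prior odds by each argument's likelihood ratio
    (how much more likely the evidence is if the claim is true).
    This treats the arguments as independent; correlated arguments
    would overcount the evidence."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical claim: "animals feel pain". Ratios > 1 weakly support
# the claim; ratios < 1 weakly count against it.
arguments = [2.0, 1.5, 1.5, 0.8]
posterior = update_odds(0.5, arguments)
print(f"posterior probability ~ {posterior:.2f}")  # ~ 0.78
```

Notice that several individually weak arguments (likelihood ratios near 1) can still move the posterior substantially when multiplied together, but only to the extent that they are genuinely independent lines of evidence.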


Further Reading (optional)


Tips & Tricks 



Good Judgement


  • In defence of epistemic modesty Greg Lewis writes about both strong and weak forms of epistemic modesty: two ways to use other people's opinions as evidence and to weigh them against your own. Greg argues for strong modesty and addresses some of the common objections to it. (Forum post - 45 min.)

  • Common sense as a prior Nick Beckstead attempts to justify the claim that we should “Believe what you think a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to your evidence.” (Forum post - 45 min.)

  • Inadequate Equilibria Eliezer Yudkowsky writes about when to think you can outperform other people, and when not to be modest. He argues that this sort of thinking can help you identify places where civilisation might be wrong on a topic. (Book - 8 chapters.)


Models, uncertainty, and cluster thinking

  • Some thoughts on deference and inside-view models Buck Shlegeris gives his thoughts on uncertainty, claiming that we should be aware of the holes in our arguments without necessarily throwing those arguments away. (Forum post - 20 min.)

  • Probing the Improbable In this paper, researchers from the Future of Humanity Institute argue that when estimating low probabilities (such as existential risk), our estimates are dominated by uncertainty about the arguments behind them, making the estimates unreliable. (Paper - 40 min.)

  • Explanation freeze “You're about to learn an important trick for more accurately figuring out the truth when you're in uncertain circumstances. Without it, you may end up misinterpreting other people's behavior, obsessing over unimportant matters, and ignoring potential problems that you might otherwise be able to catch early on.” (ClearerThinking exercise - 30 min.)

  • Forecasting and predictions Predicting which intervention will produce a positive impact using only expert consensus or deductive reasoning can be hard. Prediction markets and forecasting tournaments offer a potential solution. This panel analyses how these tools can be used in the domain of cause prioritisation. (Video - 1 hr.) 

  • Sequence thinking vs. cluster thinking (10 mins - stick to the introduction)



  • Why you think you're right -- even if you're wrong Julia Galef gives a TED Talk on soldier and scout mindsets. Soldiers are prone to defending their viewpoint, whereas scouts are motivated by curiosity. Julia gives a historical example of these mindsets and argues that we should move away from our instinct to be a soldier and towards the need to be a scout. (Video - 12 min.)

  • Interpreting Evidence mini-course from ClearerThinking trains you to think more clearly about evidence.

  • Scope Insensitivity Eliezer Yudkowsky writes about scope insensitivity, a bias that causes us to undervalue increases in the size of a problem, to the point that people will pay approximately the same amount whether a problem is 10, 100, or 1,000 times bigger. (Forum post - 3 min.) 

  • Cognitive biases and irrationality Lucius Caviola gives a brief description of the idea of a cognitive bias, and an explanation of why they exist. He also claims “all is not lost” and gives his suggestion of how to deal with bias. (Blog post - 3 min.)

  • Cognitive bias codex A public resource which maps all Wikipedia articles on cognitive biases. (Concept map - 5 min.)

  • Practical debiasing This blog post shares some research and common wisdom about reducing bias. Some solutions that seem good at first glance hold up poorly under research; however, some tools have been developed to tackle specific biases. (Blog post - 10 min.)

  • Cognitive Biases Potentially Affecting Judgment of Global Risks This report by the Machine Intelligence Research Institute (MIRI) gives an overview of a few cognitive biases in the context of risk of global extinction. (Paper - 1 hour.)
