Week 5: Existential Risk 


“So if we drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in a multitude of ways. We would fail to achieve the dreams they hoped for; we would betray the trust they placed in us, their heirs; and we would fail in any duty we had to pay forward the work they did for us. To neglect existential risk might thus be to wrong not only the people of the future, but the people of the past.”

- Toby Ord


One way to look for opportunities to accomplish as much good as possible is to ask, “Which developments might have an extremely large or irreversible impact on human civilization?” We think that existential risks (abbreviated ‘x-risks’) fit this description, and that acting to mitigate x-risks, i.e., safeguarding humanity’s future, is one way to achieve a huge amount of good.


Last week we covered the definition of an x-risk, examined why x-risks might be a moral priority, and began to look into why x-risks are so neglected by society. This week we’ll explore what we think are the main candidates for x-risk.

Organisation spotlight: Future of Humanity Institute

The Future of Humanity Institute (FHI) is a multidisciplinary research institute working on big-picture questions for human civilisation and exploring what can be done now to ensure a flourishing long-term future.

Currently, their four main research areas are:

  • Macrostrategy - investigating which crucial considerations are shaping what is at stake for the future of humanity

  • Governance of AI - understanding how geopolitics, governance structure, and strategic trends will affect the development of advanced artificial intelligence 

  • AI Safety - researching computer science techniques for building safer artificially intelligent systems 

  • Biosecurity - working with institutions around the world to reduce risks from especially dangerous pathogens

Organisation spotlight: Nuclear Threat Initiative

The Nuclear Threat Initiative (NTI) works to prevent catastrophic attacks of a nuclear, biological, radiological, chemical or cyber nature. Alongside other projects, they work with heads of state, scientists, and educators to develop policies to reduce reliance on nuclear weapons, prevent their use, and end them as a threat.

Organisation spotlight: Center for Security and Emerging Technology

The Center for Security and Emerging Technology (CSET) is a policy research organisation that produces data-driven research at the intersection of security and technology, providing nonpartisan analysis to the US policy community. 


They are currently focusing on the effects of progress in artificial intelligence, advanced computing and biotechnology. 


CSET is aiming to prepare the next generation of decision-makers to address the challenges and opportunities of emerging technologies. Their staff include renowned experts with experience directing intelligence and research operations at the National Security Council, the intelligence community and the Departments of Homeland Security, Defense and State.

Core Materials

  • The Vulnerable World Hypothesis | Nick Bostrom | TED2019 (Video - 20 mins.)

  • The Precipice, Chapter 4 - Anthropogenic Risks (1 hr.)

    • Just the sections on Nuclear Weapons and Climate Change; the section on Environmental Damage is recommended reading.

  • The Precipice, Chapter 5 - Future Risks (1 hr. 15 mins.)

    • Just the sections on Pandemics and Unaligned Artificial Intelligence; the sections on Dystopian Scenarios and Other Risks are recommended reading.

      • Definitional note: engineered pandemics and advanced artificial intelligence fall under the umbrella of ‘emerging technologies’.

Recommended reading

Criticisms of arguments for targeted existential risk reduction 

Criticisms of arguments for risks posed by emerging technologies

More to explore

Global historical trends

Global governance and international peace


Nuclear Security

Climate Change


Shaping the development of artificial intelligence

