Donation Recommendations 2018

These donation recommendations follow from our focus on risks of astronomical suffering (s-risks). This focus is based on the view that the long-term future is overwhelmingly important from an altruistic point of view, and the belief that there is a significant chance that artificial intelligence will fundamentally transform human civilization in the near future. It implies, however, a slightly different emphasis and strategic outlook than a focus on the risk of human extinction, as we have explained elsewhere.

We believe there are two factors that determine whether you should prioritize mitigating s-risks:

  • How much more weight, if at all, do you put on suffering as opposed to happiness? All else equal, somebody who places more weight on reducing suffering should care more about s-risks than somebody who places less weight on it.1
  • How likely do you think s-risks are compared to very positive future scenarios? All else equal, somebody who considers s-risks comparatively more likely (even if still less likely than very good outcomes) should care more about them.2

If you do prioritize preventing s-risks over securing the immense value that would be lost in the case of extinction, the following donation recommendations apply. Otherwise, you can learn more about our recommendations for other cause areas in our guide.

Effective Altruism Foundation

So far we are the only organization specifically dedicated to reducing risks of astronomical suffering. At the same time, it has proven difficult to identify specific interventions and organizations that we can be confident will reduce such risks. That’s why we believe our own work is still the best bet for anybody with a focus similar to ours. Future insights on how best to prevent s-risks will most likely result from the research efforts, exchanges, and collaborations that we initiate.

Donate to the Effective Altruism Foundation

EAF Fund (formerly "REG Fund")

In 2018 we started the EAF Fund with the explicit mission of reducing risks of astronomical suffering through grants to other charities or individuals. Currently, we expect grants to be made in the following priority areas: decision theory research, fail-safe AI architecture, theory and history of conflict, macrostrategy, global cooperation and institutional decision-making, and moral circle expansion. So far we have made two grants, one to Rethink Priorities and one to Daniel Kokotajlo. Although the fund’s mission is to address s-risks, there are two reasons why we think donations to the Effective Altruism Foundation itself are more valuable right now: (1) Since we have committed ourselves never to use the fund to support our own work, donations to EAF itself are more flexible; for instance, we can decide to commit a portion of our budget to the fund. (2) We think what’s most needed right now is additional research to figure out how to best reduce s-risks, as opposed to funding for more specific interventions.

Donate to the EAF Fund (DE, CH, NL)

Donate to the EAF Fund (US, UK)

Note: Donations to the EAF Fund can be matched 1:1 as part of this matching challenge. Such matched donations to the fund are likely more impactful than an unmatched donation to EAF. In the context of the matching challenge, the fund is still referred to by its former name, the "REG Fund".

Machine Intelligence Research Institute

Research carried out by the Machine Intelligence Research Institute (MIRI) is particularly valuable from our perspective. We consider their work on agent foundations the approach to AI alignment most likely to lead to safeguards against the very worst risks from AI development. We also believe they have considerably more room for funding than other organizations in their field. So if you favor giving to a specific organization over giving to funds, we recommend giving to MIRI. Otherwise, we think giving to the EAF Fund is better, since such a donation is more flexible.

Donate to the Machine Intelligence Research Institute


1 We have given reasons for prioritizing suffering before. Brian Tomasik has also argued for a similar view. However, many people reject this position (e.g. Greaves, Sinhababu).

2 For reasons outlined by e.g. Ben West and Paul Christiano, we think very good futures are more likely than very bad ones. At the same time, we think the probability of very bad futures is lower only by a factor of about 100, and, as Althaus and Gloor have argued, such futures are by no means negligible.

