This collection of articles first appeared on the website of Sentience Politics. It is currently only available in English.
Most charities focus their work on helping individuals who already exist, while few charities try to explicitly benefit individuals who will live in the future. Although it is understandable that people empathize primarily with currently existing individuals, this raises fundamental ethical questions: Is there a rational justification for the ethical disregard of not-yet-existing individuals? How important are future individuals compared to currently existing individuals?
Future individuals count for just as much
Future individuals differ from present individuals only in one property – the time they live in. But why should we consider this property to be ethically relevant? From an impartial perspective, the fact that we live in a certain time does not grant this time any special ethical importance. For example, equally intense suffering doesn’t feel better or worse depending on the time it is experienced in. If we ground our activism in concern for the wellbeing of others, whether these “others” live in the distant future, or whether they are suffering presently, should not make a relevant difference. Sentient beings in the future deserve equal moral consideration – disadvantaging them is an unjustified form of discrimination.
We perceive time and space in very distinct ways, but modern physics combines both concepts into four-dimensional spacetime. This can lead to an interesting change of perspective: The fact that time and space are interwoven and essentially similar suggests that there is no fundamental difference between temporal distance and spatial distance. According to this perspective, affecting a temporally distant individual in the future is similar to affecting a spatially distant individual. If spatial distance is ethically irrelevant, then this consideration suggests that temporal distance is also ethically irrelevant.
Why the future is important for effective altruists
Aid interventions are typically designed in terms of years or decades. But all the individuals existing in the current decade are vastly outnumbered by the individuals who will exist in the decades, centuries, and millennia to come. If we consider only the next thousand years, the ratio of the present decade to future decades is already about 1:100. And yet, in relation to cosmic timescales, a thousand years is almost nothing, which means that future individuals might outnumber presently existing ones by an even larger factor.
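The decade ratio above is simple arithmetic; a minimal sketch, assuming a "decade" is exactly ten years and a 1,000-year horizon:

```python
# Illustrative assumption: a decade is exactly 10 years.
horizon_years = 1000
years_per_decade = 10

total_decades = horizon_years // years_per_decade   # 100 decades in the horizon
future_decades = total_decades - 1                  # excluding the present decade

# Present decade : future decades is roughly 1:100.
print(f"1:{future_decades}")
```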
This suggests that ensuring suffering-free outcomes for the distant future might be a top priority for effective altruists: If there are ways to affect outcomes for millions of years or longer, then these long-term effects of our actions would most likely be much bigger than everything else we could achieve in the present.
Of course, we should consider that short-term interventions too can influence the future in the form of spillover effects. For example, improvements of animal welfare laws benefit nonhuman animals in the present, but they might also contribute to long-term societal change. Nevertheless, given that people’s altruistic focus has largely been on the current decade, we should consider directing our efforts and attention more strongly towards the (very) long-term effects of our actions.
It might be argued that what happens in the distant future is highly uncertain and unpredictable, and that we should therefore focus our attention on the short term. However, since short-term interventions also influence the future, we can hardly avoid thinking about long-term effects – the long-term effects might be more impactful even if an intervention is directed towards helping present individuals. Thus, the future being uncertain does not imply that we can ignore the long-term effects of our actions, unless we think the future is completely unpredictable. But even though forecasting is difficult, no one in their right mind would think that it is impossible to get at least some things right: We only need to do slightly better than random chance in order to make a big difference.
The importance of emerging technologies
In the past, the emergence of new technologies often led to radical transformations and thereby shaped human history. For example, industrialisation led to massive social change and largely determined the course of the 19th and 20th centuries. In general, emerging technologies endow humanity with unprecedented power, which involves both positive opportunities and severe risks. Technological developments enabled major medical advances, but they also led to chemical and nuclear weapons.
The historical analogy suggests that the future will also be shaped (to a large extent) by new technologies. If technological progress persists, then it is plausible that humanity will develop very powerful technologies in the future, technologies that might seem just as incomprehensible to us now as cell phones would have seemed to people living in medieval times. Due to the potentially game-changing impact on the future, it is crucial to ensure that emerging technologies are going to be used in a responsible way.
For example, many experts consider the emergence of smarter-than-human artificial intelligence quite possible – with potentially vast consequences. Intelligence is the key instrument for shaping the world. Therefore, the emergence of smarter-than-human intelligence could be tremendously consequential, just as the increase in intelligence from chimpanzee level to human level was very consequential (it led to the emergence of human civilization).
Future technologies could cause astronomical suffering
In particular, the historical perspective also demonstrates that new technologies can cause vast amounts of suffering. Even if technological developments altogether benefit humanity, they might still affect nonhuman sentient beings negatively. For example, industrialisation has improved human living conditions, but it has also led to factory farming and thereby multiplied human-caused animal suffering – not due to human malice or bad intentions, but simply because it became technically feasible. Similarly, future technologies might lead to a moral catastrophe dwarfing all previous ones in its scope.
Predicting future technological developments is difficult, but analyzing some possible scenarios might still offer valuable insight. Which technological developments involve serious risks of astronomical suffering?
Currently, all sentient beings are biological animals. This might change: It is possible that digital systems of the future will develop sentience if they are programmed in a particular way. There is a lot of uncertainty and debate about how sentience arises, but it seems plausible that digital sentience is at least theoretically possible. For example, if every neuron of a sentient biological brain is replaced by a functionally equivalent computer chip, then the resulting structure is presumably still sentient.
It remains to be seen whether there will actually be digital sentience in the future, but if sentient digital beings do come into existence, then they, too, deserve moral consideration. Privileging biological beings would be an ethically unjustified form of discrimination based on superficial characteristics (the substrate on which a being is implemented). If digital minds emerge, will society realize their ethical importance and care sufficiently about their potential suffering? In contrast to biological animals, digital sentience would presumably be very unfamiliar and abstract (for example, an algorithm running on a computer chip), making empathy much more difficult. Additionally, it is possible that society might fail to recognize the existence of digital sentience – just as many people failed to recognize the sentience of nonhuman animals in the past.
Given the current treatment of nonhuman animals and these additional obstacles, it is non-obvious that society will care about digital sentience to a sufficient degree. If the new technologies are economically useful, it is possible that large numbers of suffering digital beings will be created in humanity’s service. It is even possible that the future will contain far more digital beings than biological beings. The combination of large numbers of sentient beings and the foreseeable lack of concern for their suffering could lead to a moral catastrophe of unprecedented scope.
Risks of space colonization
On a cosmic scale, Earth is a tiny point in a vast universe containing hundreds of billions of galaxies. Our own galaxy, the Milky Way, already contains at least 100 billion planets. For the time being, Earth appears to be the only inhabited planet in an otherwise empty universe, but our descendants might well decide to colonize other planets in the future. While this is currently not economically feasible, more advanced technologies might render space colonization possible.
Space colonization is sometimes associated with unlimited possibilities in a utopian future, but we should also be concerned about the dangers: Without precautionary measures, spreading life throughout the universe would presumably increase the total amount of suffering many times over. For example, humanity might spread wild animals or digital sentience to other planets, thereby multiplying suffering. In general, space colonization would vastly increase the number of sentient (possibly digital) beings. The enormous size of the universe suggests that the stakes are astronomical, which means that reducing the risks of space colonization could be very important.
It might be argued that such scenarios are unlikely and should thus be dismissed. However, these scenarios are more realistic than they might seem at first glance: SpaceX is working on concepts for Mars colonization, and it is already possible to control the movements of a cockroach using an implanted computer chip, which means that there are already some primitive semi-digital beings. Moreover, such scenarios only serve as illustrative examples of astronomical suffering, not as exact predictions of the future – emerging technologies might unfold in many different ways. While any particular future dystopian scenario may indeed appear unlikely, the historical analogy with factory farming suggests that it is by no means implausible that new technologies will lead to moral catastrophes of unprecedented scope. Thus, it pays to spend time thinking about the things that could go wrong, and about ways to ensure positive outcomes.
The potential scope of a moral catastrophe involving advanced future technology implies that its prevention should be a top priority, even if its probability of occurrence were rather small. Multiplying the scope of the potential damage by its probability of occurrence yields the expected value, which is commonly regarded as a useful measure of the seriousness of a risk. This means that if the potential damage is sufficiently large, then the risk should be taken seriously even if the probability is small. In the case of future technologies, the expected value is large because the potential damage is astronomical and the probability of occurrence is – as suggested by the historical analogy – far from negligible.
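The expected-value reasoning above can be sketched in a few lines; the numbers below are purely illustrative assumptions, not estimates from the text:

```python
def expected_value(probability: float, damage: float) -> float:
    """Expected value of a risk: probability of occurrence times scope of damage."""
    return probability * damage

# Made-up numbers for illustration: a risk with astronomical potential
# damage can dominate in expectation even if its probability is small.
likely_small_risk = expected_value(0.9, 1_000)      # high probability, modest scope
unlikely_vast_risk = expected_value(0.001, 10**9)   # low probability, vast scope

assert unlikely_vast_risk > likely_small_risk
```

The comparison holds over a wide range of inputs: as long as the damage term grows faster than the probability shrinks, the low-probability risk dominates in expectation.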
How can we influence the future positively?
All the arguments above would be useless if it’s impossible to come up with ways to predictably influence the distant future. If the effects of our actions were diluted over time in a random fashion, we could bury our ambitions of ever achieving lasting change. Fortunately, there are a few things we could focus on that have a chance of generating a long-lasting impact. We just have to identify the right leverage points. The following approaches point out promising candidates:
We might work directly on mitigating the risks of impactful technologies by developing and implementing precautionary measures. In particular, the possible emergence of smarter-than-human artificial intelligence is of central importance, as it could potentially shape the entire future: An artificial intelligence with stable goals and drives towards self-preservation will work on accomplishing its goals for possibly millions of years to come. Hence, shaping the goals of such an intelligence, should it ever arise, is a crucial historical bottleneck that allows us to have a lasting impact on the distant future.
Apart from new technologies, human history was also shaped by ideas and ideologies. Therefore, improving social values and norms constitutes another potential leverage point. Spreading beneficial values would increase the probability that future generations will use their power responsibly – even if we cannot accurately predict future developments. For example, promoting antispeciesism now improves the attitudes of future generations and presumably reduces animal suffering in the future. We could also try to reduce future digital suffering by spreading concern for digital sentience at an early stage.
Because only a few people have thought about how best to prevent suffering in the distant future, further research on this question likely has a lot of value. The astronomical stakes imply that the value of information from reducing our uncertainty is huge. It is likely that we would end up with different, or at least refined, practical conclusions if much more research were put into these questions.
Future individuals count just as much as individuals who already exist, and the former likely outnumber the latter by several orders of magnitude. Therefore, we should focus on the long-term impact of our work, instead of (merely) optimizing for short-term success. History suggests that emerging technologies are pivotal as they equip humanity with unprecedented power – which is not always used responsibly. Considerations from the ethics of risk imply that we should take possible scenarios of astronomical suffering very seriously. In light of these insights and the inherent uncertainty of future developments, we need to strategically reconsider our activism.
Imagine it’s the year 2116 and humans – should they still exist – look back on the most important developments people were working on in 2016. What might they wish had been done differently? It is very difficult to come up with good answers, but it seems unlikely that reducing the consumption of animal products in the year 2016 would be the top priority. Instead, the answer probably has something to do with trajectory-changing new technologies and value spreading (which could still include vegan outreach). In any case: By taking the long-run perspective, activists can hope to achieve a bigger and longer-lasting positive impact.