The visionaries of Silicon Valley like to talk about a better world. Their visions of the future promise eternal life, the cure of all diseases, the settlement of space, and the productive merging of man and machine. Nothing seems impossible if you think big enough and bring the necessary capital. Silicon Valley's narrative: we want to do good for humanity, not just today, but with the long term in mind.
For some time now, these entrepreneurs have found their philosophical foundation in "longtermism," a school of thought that assigns at least as much value to people in the distant future as to people of the present. Longtermists see our species at a crossroads: if it does not wipe itself out, it could have a future of billions of years ahead of it, in which it colonizes space and eventually lives on as a digital copy of itself. To make this glorious future possible, threats to humanity must be cleared away early. The Oxford philosopher Nick Bostrom, a pioneer of longtermism, defined "existential risks" such as annihilation by nuclear war, asteroid impacts, or a badly programmed superintelligence as early as the beginning of the millennium.
Musk: "A close match for my philosophy"
Bostrom's Oxford colleague William MacAskill brought the previously marginal school of thought to a wider audience in the summer of 2022 with his bestseller "What We Owe the Future." Elon Musk, too, issued a reading recommendation via his high-reach account on the platform X: longtermism, he wrote, was "a close match for my philosophy."
Musk is not the only tech billionaire in the orbit of longtermism, but he is the most influential. It is a fitting symbiosis, and not only because Musk and the longtermists share a weakness for the letter X. The richest man in the world has donated money to the Future of Life Institute, co-founded by Skype inventor Jaan Tallinn, and to the Future of Humanity Institute of the Oxford philosopher Bostrom. With his numerous companies, Musk is working on precisely the projects that MacAskill and Bostrom have in mind: connecting humans and computers, researching artificial intelligence, and colonizing space. An avowed science-fiction fan, Musk likes to recommend Isaac Asimov's "Foundation" series, in which a mathematics professor calculates when the Galactic Empire will fall and how future civilizations can be protected. Asimov's series, the literary blueprint for the longtermist program, is said to have inspired Musk to found his space company SpaceX, which plans to send the first humans to Mars in 2028.
Isn't it good to reach for the stars? Critics of longtermism ask who will actually be allowed to explore distant galaxies one day. Will it be only (influential) elites, while everyone else has long since perished in famines and climate-driven natural disasters? Longtermism and its advocates stand accused of being blind to real, present-day problems. To put it bluntly: why donate to Welthungerhilfe today and save a few million children when you could instead enable billions of children not yet born to live on Mars?
Is the climate crisis an "existential risk"?
From the perspective of longtermists, present-day catastrophes appear minor. MacAskill and his Oxford colleague Hilary Greaves put it poetically: "If human history were a novel, we would be on the very first page." Émile P. Torres, a postdoc in philosophy at Case Western Reserve University in Cleveland and once a longtermist himself, has since broken with the movement and takes a far bleaker view of its outlook. "From a cosmic perspective" and in the bigger picture, Torres writes, even "a climate catastrophe that reduces human civilization by 75 percent over the next two millennia is no more than a blip – like a nonagenarian who stubbed his toe two years ago."
It is not as if the climate crisis goes unmentioned in the writings of the longtermists. But the thinkers from Oxford do not seem to see an "existential risk" in it. In his bestseller "What We Owe the Future," MacAskill walks through a climate worst-case scenario 300 years from now in which humanity has burned all fossil fuels. Even a rise in the global average temperature of 7 to 9.5 degrees above pre-industrial levels, he argues, would not necessarily lead to "civilizational collapse." Climate change would be "bad for agriculture in the tropics," but richer countries could adapt, and "temperate regions would emerge relatively unscathed." And what about famines, refugee movements, conflicts over resources? "Most conflict researchers," MacAskill writes, consider climate change a minor factor here – in stark contrast to "economic growth." Growth is the fuel that will carry humanity to the stars. For MacAskill, the real existential risk is stagnation.
So is longtermism the utopian counter-movement to radical climate activism, which, in case of doubt, would rather save the planet than humanity? Torres warns against mistaking the trendy philosophy from Silicon Valley for a preserver of our civilization. The "trick" lies in its "idiosyncratic" definition of what it means to be human, Torres says in an interview with the FAZ: "Our species could die out in two years. As long as it is replaced by post-human successors, such as artificial intelligences, humanity's extinction has not occurred for the longtermists."
Longtermism fits the capitalist growth narrative
Longtermism borrows one of its basic principles from utilitarianism, a moral philosophy of the 18th century: an action is good if it creates the greatest possible benefit for the greatest number of people. The thought experiment of the "trolley problem" is well known: a tram is about to run over five people unless a switch is thrown. If it is thrown, however, a single person on the other track will die. How would you decide? The utilitarian throws the switch, sacrificing one life to save five. The longtermist now expands the thought experiment to an unimaginable number of people who may stand on the tracks in the distant future – and, in case of doubt, diverts the tram so that a few million people populating the earth today pay the price.
Perhaps Musk also finds longtermism appealing because its principle of "more is better" translates perfectly into a capitalist growth narrative. "Imagine getting a birthday present," Torres says. "It either contains a bomb or eternal life." The longtermist takes that gamble. By this logic, it makes sense to warn of humanity's extinction at the hands of superintelligent AI while simultaneously entering a billion-dollar arms race for AI chips. By this logic, one can believe that the high-performance computers of the future will solve the climate problem while ignoring the fact that huge data centers around the world are already contributing massively to the overexploitation of the planet. When all the lithium reserves are used up and the last liter of water has gone to cooling servers, the tech giants will simply take off for Mars or upload their consciousness to the cloud.
The protagonists of longtermism did not always have only the distant future in view. But the idea that a few wealthy people decide on the basis of cold calculation which investments in civilization are worthwhile and which are not has been around longer. Longtermism grew out of the "effective altruism" of the Oxford philosophers MacAskill and Toby Ord. The idea: charity can be optimized if you have the right job and donate to the right organization. What matters is not so much the size of the gift as its recipient.
In effective altruism, the scientific approach always trumps moral intuition. What gets children in developing countries into school is not new books or more teachers, but deworming treatments. Instead of spending $50,000 to train a single guide dog, the money is better spent on operations that cure hundreds of blind people. Schooled entirely in utilitarianism, effective altruism wants to help as many people as possible – or animals, since there are more of them, especially the very small ones: 400 billion shrimp saved promise an excellent result.
With Trump, Musk finally wants to go to Mars
Effective altruism fell into disrepute when its most prominent representative, the crypto trader Sam Bankman-Fried, was arrested in late 2022 on suspicion of fraud and money laundering. Further allegations followed: several women turned to the media and reported sexual harassment in effective altruism circles. Bostrom had to apologize for a decades-old email in which he had written: "Black people are stupider than white people." In the spring of 2024, his Future of Humanity Institute, co-financed by Musk, closed; the reasons remained opaque. Bostrom spoke of a "death by bureaucracy."
For Musk, the allegations surrounding the Oxford movement are probably nothing more than a last gasp of the "woke mind virus," soon to be overcome. The entrepreneur has set himself the goal of abolishing excessive bureaucracy and giving free rein to unbridled inventiveness. The enemy is overregulation, which, among other things, prevents humanity from becoming "a multiplanetary civilization," Musk wrote shortly before the US election. Voting for Donald Trump, he added, also means "voting for Mars."
The rest is history. At his inauguration, Trump promised that the United States would "plant its flag on Mars." His supporter Musk could hardly be happier. In the future, he will sit at the levers of power as the president's whisperer. He is supposed to make the state more efficient and roll it back precisely in the areas where his own companies are active: mobility, space travel, communication.
Nevertheless, Musk's bet on Trump is one with an uncertain outcome, not least because Trump's goals, in contrast to those of longtermism, are often short-term in nature. The Tesla boss is unlikely to welcome Trump's protectionist trade policy and tariff threats against China in particular – the Middle Kingdom is one of his most important car markets. How long the "bromance" between Trump and Musk will last remains to be seen. From the perspective of longtermism, Trump's presidency is just a small step on Musk's path toward a glorious future for our species.