The Hidden Dangers Of Silicon Valley's Philosophical Trend, 'Longtermism'

Most people would agree that it's a good idea to think about how our actions affect not only the present but also future generations. And we don't just mean one's own family, community, city, or country, but all people everywhere: humanity the species. Take climate change, where survival and quality of life require getting over the petty short-sightedness of quarterly dividend yields and this year's hot-button political issue. We might not see Manhattan go underwater during our lifetimes, it's true, and there'll always be "I've got mine, Jack" naysayers. But if we hang our empathy on nothing more than the old tried-and-true "think of the children" argument, we might still craft a better future.

That's "longtermism" in a super simplified nutshell. Part philosophy, part ethical creed, part brainchild of Oxford professors and Silicon Valley tech giants, longtermism asks people to think beyond the present — way, way beyond it, as The New York Times explains. We're talking past the next 10 years, 100 years, 1,000 years, or even a million years, to all the trillions of people who might be born to a humanity that lives and thrives far into the future. If that sounds ridiculous, or it's hard to believe that our ape race could survive that long, you're not alone.

Such apocalyptic presumptions are baked into longtermism's DNA, deep within its darker undercurrents. Behind the high-minded thinking lie unnerving implications, especially if those with wealth and power presume to know what's best for everyone.

Altruism, by the numbers

Longtermism has roots in "effective altruism." If you've ever said to yourself, "Maybe I shouldn't give money to that one unhoused person, but to an organization that helps unhoused people," then you already understand effective altruism. The idea is to be smart about where and how we help others rather than just tossing money around willy-nilly. Like longtermism, effective altruism took root at Oxford University. The ideology has yielded organizations like the Center for Effective Altruism, and such organizations have given at least $46 billion in funding to the cause, per 80,000 Hours.

Longtermists, concerned with humanity's survival, also focus on effective choices. Researchers at think tanks like Oxford University's Global Priorities Institute (GPI) wrote in "The Case for Strong Longtermism," "Our concern is with relatively weighty decisions, such as how to direct significant philanthropic funding." In his book "What We Owe the Future," GPI member William MacAskill defines longtermism using three parameters: significance, persistence, and contingency. In order, these mean: "How impactful is a current decision on the future?"; "How long will the consequences of such a decision last?"; and "How likely is it that such a decision could come again?"

This all sounds pretty reasonable. However, longtermism also tends to overlook poverty, oppression, and suffering so long as they don't threaten our extinction. As Aeon discusses, longtermist and founder of Oxford's Future of Humanity Institute (FHI) Nick Bostrom has characterized everything from AIDS to a nuclear holocaust as "a giant massacre for man" but "a small misstep for mankind."

Think tanks and echo chambers

At its most extreme, longtermism gives rise to the same "ends justify the means" moral fallacy that's been critiqued in countless stories, books, and films. So a glorious, shining future of billions necessitates the deaths of motley millions? "Well, okay then," a longtermist might conclude.

Such detached intellectualism comes across in longtermist literature, which is overflowing with technical jargon and a bizarre obsession with numerical prediction. "The Case for Strong Longtermism" talks about "benefit ratio (BR)," "axiological strong longtermism (ASL)," "influencing the choice among non-extinction persistent states," "ambiguity aversion," and more. When talking about how humanity's population might grow as we migrate off-world, the paper says, "Even a 50% credence that the number of future beings will be zero would decrease the expected number by only a factor of two. In contrast, a credence as small as 1% that the future will contain, for example, 1 trillion beings per century for 100 million years (rather than 10 billion per century for 1 million years) increases the expected number by a factor of 100."
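
For the curious, here's the expected-value arithmetic behind that quoted claim. The snippet below is a minimal sketch using the paper's own illustrative figures (10 billion people per century for 1 million years versus 1 trillion per century for 100 million years); the function name and structure are ours for illustration, not anything from the paper itself.

```python
# A minimal sketch of the arithmetic in the quoted passage (illustrative only).
# The populations and credences are the paper's hypothetical examples, not real estimates.

def expected_future_beings(scenarios):
    """Expected number of future beings, given (probability, population) pairs."""
    return sum(prob * population for prob, population in scenarios)

# Baseline guess: 10 billion people per century for 1 million years.
baseline = 10e9 * (1_000_000 / 100)      # = 1e14 beings

# Speculative alternative: 1 trillion people per century for 100 million years.
big_future = 1e12 * (100_000_000 / 100)  # = 1e18 beings

# A 50% credence in extinction (zero future beings) only halves the expectation...
print(expected_future_beings([(0.5, baseline), (0.5, 0)]) / baseline)             # ~0.5

# ...but even a 1% credence in the huge future multiplies it by roughly 100.
print(expected_future_beings([(0.99, baseline), (0.01, big_future)]) / baseline)  # ~100.99
```

That lopsidedness is the point: in this style of reasoning, a sliver of probability attached to an astronomically large future swamps everything happening to people alive today.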

Lots of outlets, including Salon, Aeon, Current Affairs, and The Intercept, have called out longtermism for not being "ethically sound," as The Washington Post puts it. And you might be thinking, "Okay, a bunch of stuffy academics with their heads up their butts have some wacky ideas. And?" The problems start when longtermism spills out into the real world and gets co-opted by those with influence, like the billionaires of Silicon Valley.

The obsessions of billionaires

The billionaires of Silicon Valley have latched onto longtermism in a big way. On Twitter, Elon Musk professed his love of William MacAskill's book, "What We Owe the Future," saying, "Worth reading. This is a close match for my philosophy." And what is that philosophy? It's hard to say with a guy like Musk, but Salon quotes him as saying that humanity must "preserve the light of consciousness by becoming a spacefaring civilization & extending life to other planets." Hence his preoccupation with going to Mars and the implication that Earth is doomed.

Skype founding engineer Jaan Tallinn is also a proponent of longtermism. He co-founded the Centre for the Study of Existential Risk at the University of Cambridge, which studies, amongst other things, the dangers of artificial intelligence (AI). To a longtermist, existential risks are the biggest threat to humanity, as Vox explains. These are game-stoppers that could completely obliterate humanity and wipe away all future lives.

On a similarly apocalyptic note, Musk's old PayPal co-founder Peter Thiel is a climate-change-denying libertarian who built an "apocalypse retreat" in New Zealand, per Current Affairs. And he's not the only billionaire to do this. The New Yorker quoted LinkedIn co-founder Reid Hoffman as saying that "fifty-plus percent" of Silicon Valley billionaires have "apocalypse insurance" in the form of a refuge. All in all, such billionaires seem strangely fixated on human annihilation. And as longtermists believe, today's annihilation just might be tomorrow's paradise.

The transhumanist connection

There's another, even stranger layer to Silicon Valley's preoccupation with longtermism, one that connects to the region's position at the forefront of technological innovation. Specifically, it deals with transhumanism, the belief that humans will fuse with machines and become ultra-humans. Cyborg limbs, neural implants, consciousness uploads: This is how to transcend the definition of "human" and embrace some grand, tech-facilitated utopia. Slate, by contrast, has described transhumanism as a kind of neo-eugenics that could split humanity even further into haves and have-nots, while an article on MedCrave describes transhumanism as a form of "digital slavery" in which humans are even more beholden to the whims of tech superpowers.

Nick Bostrom, the Oxford professor Salon calls the "father of longtermism," has written a transhumanism-extolling manifesto called "Transhumanist Values." Salon also describes how Bostrom has expressed concern about the overbreeding of less-than-talented, less-than-smart people. "If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species," he wrote in "Existential Risks." Transhumanism, he says, is the solution to such a problem. According to Futurism, Elon Musk has expressed similar desires and wants to "jump-start the next stage of human evolution" with his "brain hacking" tech, Neuralink.

Take Silicon Valley's fear of human extinction, add longtermism's emphasis on the future over the present, and a very creepy, alarming picture emerges. Longtermism becomes a vehicle for the wealthy, powerful, and tech-minded to sidestep ethics and presume to know what's best for everyone.

The value of a life

In the end, what does this all mean? Don't worry — we're not implying that tech billionaires are architecting the demise of the globe's pesky poor and that they'll hide from existential risks in their apocalypse bunkers before rocketing off the dustbowl of a climate-ravaged Earth in gleaming cyborg bodies. That's practically 2013's "Elysium." 

Ultimately, The Washington Post's Christine Emba says it best: "The turn to longtermism appears to be a projection of a hubris common to those in tech and finance, based on an unwarranted confidence in its adherents' ability to predict the future and shape it to their liking." Furthermore, "focusing on the future means that longtermists don't have to dirty their hands by dealing with actual living humans in need, or implicate themselves by critiquing the morally questionable systems that have allowed them to thrive. A not-yet-extant population can't complain or criticize or interfere, which makes the future a much more pleasant sandbox in which to pursue your interests." Additionally, South China Morning Post columnist Alex Lo argues that it's all but impossible to be responsible for some 80 trillion future lives. We would add: people can't even manage their weekend plans. Besides, how belittling is it to think that future humans can't sort out their own business?

But is there anything to longtermism besides the sanctimoniousness of academia's hallowed halls, or the megalomania of the uber-wealthy? Maybe there's a reason we keep coming back to the parent-child bond — real long-term survival just might reside there.
