Future Humans May Have Marc Andreessen To Thank
Longtermism is the accelerationist philosophy at the heart of Silicon Valley
This is the second part of a series using Marc Andreessen's “Techno-Optimist Manifesto” as a text to understand the motivations of Andreessen, his venture fund a16z (which has fingers in most of the internet and greatly impacts online business), and Silicon Valley at large. There will be at least one more part. You can read the first article here.
The opening of Marc Andreessen's “Techno-Optimist Manifesto” reads like a religious screed. He is “here to bring the good news,”1 a prophet we didn’t ask for but maybe the prophet we deserve. Most statements in the manifesto start with “We believe.” There are no data, and few concrete references beyond some random quotes2 and the list of "patron saints" that concludes the document.
The manifesto is an article of faith.3
If Andreessen is writing about his beliefs, where do those beliefs come from? He surely has financial incentives, like the incentive to promote war and to raise money from investors for his fund. But Andreessen's manifesto also points fervently to a utilitarian philosophy that has taken over many of the minds of Silicon Valley: longtermism.
What is Longtermism?
Longtermism is an outgrowth of the effective altruism movement, a philosophical approach to philanthropy that encourages people to maximize their giving by leveraging utilitarian calculations that demonstrate where resources will have the greatest effect. The goal is to do the most good. A simple goal, and one that is hard to argue against on its surface.
The practical results of effective altruism have varied. The same philosophy has supported buying inexpensive mosquito nets to prevent malaria in millions of people, encouraged privileged school-smart folks like Sam Bankman-Fried to enter lucrative professions and commit fraud, and argued for maximizing donations to already wealthy countries because those inhabitants are most likely to have innovative and productive lives.4
Effective altruists believe that they are the ones who can calculate the difference between right and wrong. A self-selected, non-elected, wealthy group of largely white, Western men. Men like Andreessen.
Longtermism expands on this "do the most good" ideology and extends it to the idea of future humans. The philosophy went mainstream in 2022, backed by a powerful PR campaign, with the publication of What We Owe The Future. The author, Oxford philosopher William MacAskill, is the dominant spokesman for effective altruism. In the book, he describes longtermism as "the idea that positively influencing the longterm future is a key moral priority of our time."
MacAskill believes that we are at a unique moment in humanity's existence. Due to the rapid pace of technological development, it's possible that certain values and ways of being will be "locked in" and become essentially unchangeable for eons. This may be due to Artificial General Intelligence that could codify information for millions of years, or even take over society and replace humans as the creators of culture. The thrust of longtermism is that the decisions we make and actions we take now may have an outsized impact on the future of humanity.
Longtermism is not the first attempt to consider future impacts of present actions, and MacAskill references the oral constitution of the Haudenosaunee Confederacy as an example of this thinking, writing: “[t]he Gayanashagowa...has a particularly clear statement. It exhorts the Lords of the Confederacy to ‘have always in view not only the present but also the coming generations.’”
For MacAskill, improving outcomes for future humans would necessarily mean managing climate change, avoiding engineered pathogens and pandemics, and keeping Artificial Intelligence in service of humans, so that we can become smart enough, and develop our technology far enough, to colonize the light cone with our consciousnesses and live forever.
Longtermism is specifically focused on an extremely long time horizon, on the order of millions of years, far beyond what we can remotely imagine. On that timeline, there are so many possible humans that they will necessarily move to the stars, potentially only as digitized minds. As the New Yorker reported, Nick Bostrom, a prominent longtermist philosopher also at Oxford, believes:
…that if humanity successfully colonized the planets within its “light cone”—the plausibly reachable regions of the universe—and harnessed the computational power of the stars to run servers upon which the lives of digital consciousnesses might be staged, this could result in the efflorescence of approximately ten to the power of fifty-eight beings.
Andreessen, too, believes that “our descendants will live in the stars.”
There is a certain hubris required to act in accordance with longtermism that seems to appeal to techlords like Andreessen: what you do now can reverberate throughout the universe for the remainder of humanity's existence. That's some big impact, and probably helpful in justifying those excessive compensation packages.
A Moral Philosophy For The Tech Elite
It's not surprising that longtermism has found a home amongst the elite5 of Silicon Valley: it explicitly supports tech accelerationism as a moral good.
Accelerationism is an ideology that calls for rapid capitalist economic and technological growth in order to disrupt and fundamentally change society. There are right-wing and left-wing accelerationists, but the most prominent modern scholars are primarily on the right, like Nick Land, who is named as a "patron saint" of techno-optimism in Andreessen's manifesto. At the time of publishing, Andreessen has “e/acc” in his X bio, which stands for “effective accelerationist,” the wording a play on effective altruism.
Longtermism cannot be separated from accelerationism, because the philosophy is predicated upon the importance of population growth and technological advancement. MacAskill is very concerned about technological stagnation, since it could lead to population decline or even human extinction. He writes: “to safeguard civilisation, we therefore need to make sure we... reach a point where we have the technology to effectively defend against such catastrophic risks.”6
The “Techno-Optimist Manifesto” shares this fear:
We believe not growing is stagnation, which leads to zero-sum thinking, internal fighting, degradation, collapse, and ultimately death.
There are only three sources of growth: population growth, natural resource utilization, and technology.
Developed societies are depopulating all over the world, across cultures – the total human population may already be shrinking.
Natural resource utilization has sharp limits, both real and political.
And so the only perpetual source of growth is technology.
Silicon Valley is a natural ally for the longtermist cause, ready to shape the future, and to profit from it. MacAskill frames his discussion of AI and the lock-in concerns as inevitable, given the rate of technological development, and thus as something to be carefully managed. Someone like Andreessen can come along and take that potential inevitability as permission, casting his personal benefit and his global impact as altruism. As he writes in the manifesto, “[w]e believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” Andreessen is financially invested in AI, and now he can make the moral case for it, too.
However, Andreessen has critiqued effective altruism, calling out in particular its tendency toward abstraction. Andreessen's wife, Laura Arrillaga-Andreessen, has taught philanthropy at Stanford and, according to Andreessen, proposes “a grounded version of effective altruism.” We can perhaps assume that is what he would claim for himself, since his wife appears to run any philanthropic pursuits that use Andreessen's money.
Andreessen also said that “[effective altruism] leads you into playing god and social engineering,” but based on his manifesto it's not clear that he has internalized that as a negative. Andreessen does precisely what he cautioned against, writing an entire manifesto that drips with religious language and frames an existential fight for the future of humanity.
Andreessen may not identify publicly as an EA or longtermist, but in words and impact he is one. Elon Musk has also described longtermism as “a close match for my philosophy.” Technology as god has been the foundation of Silicon Valley for generations, and now it has a robust, well-funded philosophy to match.
What kind of world are MacAskill, Andreessen, and the minds deciding what technology gets funded trying to build? In the next installment, we'll read the Techno-Optimist Manifesto as a blueprint for Silicon Valley's brave new world.
Read The Series
This is an explicitly Biblical reference. Jesus is often considered the “good news,” as is what he represents: a new covenant with God and the resulting opportunity for salvation.
The Carrie Fisher one sticks out; it’s hard to imagine that she would be on his side!
I’m certainly not the only person to pick up on this. Henry Farrell, a professor at Johns Hopkins, has a nice piece on the religious tone of the manifesto here and highlights how it is designed to separate true believers from those who oppose Andreessen’s vision for the world.
Nick Beckstead, a prominent EA who ran FTX's Future Fund, wrote this in his PhD thesis. As far as I can tell this is a largely theoretical stance, but as we’ll see with the link between longtermism, pronatalism, and the “Techno-Optimist Manifesto” in the next installment, there is a prioritization of certain, um, kinds of people. (I can’t quite figure out who first surfaced this passage from the thesis; it’s been referenced so widely. If you know, please tell me so I can credit appropriately.)
One of the funnier things about Andreessen's manifesto is how he identifies the elites as existing elsewhere, out there, certainly not including himself as a billionaire.
MacAskill is also concerned about the existential risk of Artificial General Intelligence that turns on humanity, or as already discussed, “locks in” harmful values. Andreessen is critical of that viewpoint, which I’ll explore more in the next installment. But they both end up with the same conclusion: AI is happening.