O Brave New World That Has Such Machines In't
The technocratic future looks much like the present.
This is the third (and final, for now at least; omg, I need to stop trying to understand this man’s mind) part of a series using Marc Andreessen's “Techno-Optimist Manifesto” as a text to understand the motivations of Andreessen, his venture fund a16z (which has fingers in most of the internet and greatly impacts online business), and Silicon Valley at large. You can read part one here and part two here.
“We have enemies.” - Marc Andreessen, The Techno-Optimist Manifesto
Marc Andreessen, the powerful venture capitalist, would have you believe that the world is out to get you. Not the world of disease or the world of war, but the world of a “demoralization campaign” waged through bad ideas like “tech ethics” and “stagnation.”
In the Techno-Optimist Manifesto, Andreessen’s section “Enemies” betrays a penchant for rigid duality, where questioning technological development is tantamount to treason and moderation is nihilistic. It’s a kind of technological Manichaeism in which the forces of good are aligned with the longevity of cognition, or the spirit, and the forces of evil with the body and the material world that must be overcome by technology. And, as in gnosticism, the special knowledge required to fight evil is found only through direct transmission to the maligned enlightened, reinforcing the inaccurate claim to outsider status that we see in Andreessen's manifesto.
Andreessen’s future is one of “machines [that] work for us,” a colonized galaxy, and endless job growth. As he writes, “We believe the ultimate mission of technology is to advance life both on Earth and in the stars.” The long-term future of humanity is in the hands of wealthy technologists like Andreessen: what do they have in store for us?
Wait, Aren’t Effective Altruism And Effective Accelerationism Fighting?
If you spend any time on tech X, you will have seen the likes of Andreessen fomenting discord between accelerationism and effective altruism/longtermism. The differences, however, are overstated; they are largely aesthetic rather than material, leading as they do to the same outcomes that prioritize the same people.
The biggest point of contention is risk management for AI development: Andreessen includes “risk management” on his list of enemies in the manifesto, and wastes much digital ink making fun of EAs and their preoccupation with not killing all of humanity. However, when prominent EAs and their ilk, like Elon Musk, talk about managing risk, they come up with stupid ideas like a six-month moratorium on AI development. Six months is hardly a significant slowdown (and the proposal was largely a marketing ploy, implying that this technology is so powerful that even six months will drastically change the world). They all still want to make AGI and, as I wrote here, EA/longtermist spokesman William MacAskill explicitly calls for an acceleration of technological development.1 They are on the same side.
Longtermism and accelerationism end up in the same place, even if they have different aesthetic reasons for trying to get there, and I find it largely useless to adjudicate any divergence. It's easy to get bogged down in the supposed differences in philosophy—and I believe this is a concerted effort on the part of Andreessen and his cohort to separate themselves from the now-tainted EA movement—but the ends are identical, even if the means vary.
The technocratic worldview that stems from EA, longtermism, and accelerationism has three foundational beliefs:
Death is bad
Smart people are better
Expansion is best
Death Is Bad
In the manifesto, Andreessen writes: “Our enemy is deceleration, de-growth, depopulation – the nihilistic wish, so trendy among our elites, for fewer people, less energy, and more suffering and death.”
The fear of death is at the core of Andreessen's manifesto. He writes multiple lines about the fight against nature, over which humans naturally have dominion, a nature that is trying to kill us: “…this is why we are not still living in mud huts, eking out a meager survival and waiting for nature to kill us.”
But it's notable that Andreessen, as I explored here, is an avid investor in defense tech, including defense AI that is being used in Israel as I type. It’s ok for some people to die. People who are against America and her allies.2 People who don't subscribe to Andreessen's worldview, maybe (see his vitriol in the manifesto for those who disagree with him).
So death is something to be overcome, but only for those who deserve to overcome it. This is a well-known preoccupation of the wealthy, particularly those in tech.
In What We Owe The Future, MacAskill discusses the anti-aging market, writing that “wealthy techno-optimists have provided hundreds of millions of dollars in funding for biomedical R&D companies aiming to achieve indefinite life spans,” including both Jeff Bezos and Peter Thiel. MacAskill seems sympathetic to the cause, writing this chilling clause: “Even if aging cannot be cured in our lifetime…”
These sci-fi vanity projects tell us more about the technocratic worldview than just about anything else: it’s more important to these men to spend hundreds of millions of dollars to have younger penises than it is to, oh, I don’t know, solve world hunger or some such silly thing. We have to cure aging and stop death, but only for some people.
Which people, you ask?
Smart People Are Better
A hyperfocus on the future changes the incentives for action in dangerous ways. As prominent effective altruist Nick Beckstead, a philosopher who ran FTX's Future Fund, wrote in his PhD thesis:3
…saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
This kind of thinking gives cover to eugenicist beliefs like those Andreessen gives voice to in the manifesto when he writes: “[s]mart people and smart societies outperform less smart ones on virtually every metric we can measure.”
“Smart people” is a vague yet loaded phrase, rooted in the racist application of concepts like IQ to justify the forced sterilization of Black, Indigenous, and poor women.
This dovetails with the pronatalist views he espouses in this manifesto when he writes “[d]eveloped societies are depopulating all over the world, across cultures – the total human population may already be shrinking.” The word “developed” is doing a lot of heavy lifting here.
Pronatalism is a nationalist belief that the world—meaning most often the white, always the Western, world—is underpopulated, so certain—white, Western—people need to have more children. It may seem counterintuitive that eugenics would align with pronatalism, but fundamental to the longtermist-driven pronatalist belief is the desire to make better humans, smart ones who will “outperform,” as Andreessen so concisely wrote.
Andreessen here echoes the racist great replacement theory, which holds that white people are being replaced by Black and brown people in a kind of “white genocide.” The only solution is to reproduce more than “them,” the people from what Andreessen suggests are less smart, less developed societies.
For longtermists, the pronatalism stems from the calculated need to have more people alive in order to increase the total happiness in the world. There is a level of moral obligation here,4 the idea that if you don't have children then you are depriving these future people of existence. And those people should embody a very specific view of intelligence and development, one that supports techno-capitalist growth.
What are we going to do with all of these super smart very special people?
Expansion Is Best
As I wrote earlier, accelerationism is foundational to longtermism. MacAskill frames this as inevitable due to the rate of technological change. This sounds very similar to the discussion of AI in Silicon Valley today. Sam Altman, CEO of OpenAI,5 is constantly talking about how AI is inevitable, so it’s his self-appointed role to make sure AI is safe for humanity.
The fear that AI is going to destroy humanity, that it is an extinction-level risk, is explicit in longtermism. Andreessen deviates from MacAskill here, writing that “all the machines work for us.” But they both want AGI to exist.
There is no real questioning by Andreessen, MacAskill, Altman, or any of these influential figures as to whether we might want to prevent the development of AI. It's going to happen, they say. The only thing we can do is try to control it for the benefit of humanity.
But why is it inevitable? Why isn't there another option?
Because longtermism is utilitarian, meaning it must prioritize the greatest good for the greatest number of people. There must be more people, and we have to do everything in our power to ensure that there are more of the (right) people.
Longtermism is pronatalist, as I wrote above, but on such a long timescale that it’s considering the potential of trillions of lives. The only way, MacAskill believes, that those lives can happen is through the technological advances that can come from AI. He at least is not unaware of the problems of extreme population growth: at some point, there will quite literally not be enough physical material to support humans. This is why we need AI: it will figure out how to get more from what we have, help us colonize the stars, and ultimately upload our consciousnesses so we can live forever.
In the manifesto, Andreessen writes:
We believe material abundance therefore ultimately means more people – a lot more people – which in turn leads to more abundance.
We believe our planet is dramatically underpopulated, compared to the population we could have with abundant intelligence, energy, and material goods.
We believe the global population can quite easily expand to 50 billion people or more, and then far beyond that as we ultimately settle other planets.
We believe that out of all of these people will come scientists, technologists, artists, and visionaries beyond our wildest dreams.
We believe the ultimate mission of technology is to advance life both on Earth and in the stars.
The philosophy takes at face value a kind of manifest destiny that is a continuation of the colonialist project. There is no moral room to argue against it, because then you're saying that trillions of people shouldn't exist. And you're back to the first principle of longtermism: future people matter.
The real question becomes: what kind of lives are these? For us living now, and for the trillions who may follow? Utilitarianism can hold that a life with even a very small amount of happiness is better than no life at all. Population ethics can lead to odd thought experiments that try to discern what degree of pain is acceptable, especially if the total or average happiness is high, or if each person is just slightly more happy than miserable.
This is part of the paradox of the repugnant conclusion. It's also a great example of our current economic system: the average wealth is high, but that's because a small number of people have huge wealth. The repugnant conclusion holds that it's better to have this high average wealth than not, even if part of the population has significantly less wealth or lives that are barely worth living. This helps these technocrats justify the wealth inequality throughout the world.
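To make that arithmetic concrete, here is a minimal sketch of the comparison at the heart of the repugnant conclusion under a total-utilitarian view; the numbers are illustrative assumptions of mine, not figures from the manifesto or from MacAskill:

```latex
% Toy total-utility comparison (illustrative numbers, my own assumption).
% World A: 10^9 people, each at welfare level 100 (rich, flourishing lives).
% World Z: 10^{12} people, each at welfare level 1 (lives barely worth living).
W(A) = 10^{9} \times 100 = 10^{11}, \qquad W(Z) = 10^{12} \times 1 = 10^{12}.
% Since W(Z) > W(A), a total-utilitarian calculus ranks the enormous,
% barely-happy world Z above the smaller, flourishing world A.
```

On this kind of ledger, sheer quantity of (barely happy) lives can outweigh any concern for how good each life actually is, which is exactly the move the technocratic worldview depends on.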
It follows then that an extreme utilitarian approach could see disembodied, uploaded-to-the-light-cone life as a good thing: no physical pain, no death.
But what kind of joy is there in being a human without a body? Can you even be human without one?
These philosophical questions are interesting.6 They show how, through endless abstraction, powerful and wealthy technocrats can leverage the raw calculations of morality to justify their actions and create the future they want—regardless of how that future might impact everyone else.
If you believe Andreessen, there are only two sides: pro-tech, or anti-growth. That’s it. There is no room for the tech hopeful, as Cory Doctorow names people like me. There is no room for those who are concerned about the quality of life for people now. There is only endless growth at any cost.
“We believe that since human wants and needs are infinite, economic demand is infinite, and job growth can continue forever,” writes Andreessen.
Ah yes, the technocratic future: our consciousnesses divorced from our bodies, applying for jobs amongst the stars so we can pay our digital landlords to house our digital selves. The brave new world is much like the current one.
Émile P. Torres was a large influence on my initial research into and understanding of Effective Altruism and Longtermism in 2022. While I did not review or directly reference Torres’s work in these essays, I do want to credit them for their foundational impact on my thought. Torres has written widely about these philosophies. You can see their articles here.
Read The Series
MacAskill, for what it’s worth, does seem to be a true believer. He is consistent in his logic, and as such appears equally concerned about climate change and technological development. The technocrats who leverage his work to justify their own interests are demonstrably less concerned about climate change, believing in a passive inevitability. As Andreessen writes: “We believe that there is no material problem – whether created by nature or by technology – that cannot be solved with more technology.”
“We believe America and her allies should be strong and not weak. We believe national strength of liberal democracies flows from economic strength (financial power), cultural strength (soft power), and military strength (hard power). Economic, cultural, and military strength flow from technological strength. A technologically strong America is a force for good in a dangerous world. Technologically strong liberal democracies safeguard liberty and peace. Technologically weak liberal democracies lose to their autocratic rivals, making everyone worse off.”
h/t Émile P. Torres, who first surfaced this quote and information!
I am haunted by a quote in the Elon Musk biography by Ashlee Vance where Musk says smart women need to have more children.
again
and obviously, any mistakes in logic here are my own