Knowledge without goodness is dangerous
more thoughts on AI and the state of work + a little hiatus
My dearest readers, I’m deep in a personal project. I picked my head up today and realized it had been a few weeks since I sent a letter. While I don’t have a strict schedule here, I can see that I am not going to have anything worth sharing until I get through this lil’ sprint of mine. So, in respect of our lil’ reader-writer contract, I’m going on hiatus until mid-June. I may write, I may not, but I’m relieving myself of the obligation.
I’ve paused paid subscriptions, which means if you’re a paid subscriber (thank you!) your subscription has been frozen and will resume once I’m back, tentatively June 15th. I’ll email you to let you know when I’m restarting them, and you can always adjust your subscription here.
Last year, I wrote about so-called AI and how it’s not the threat to knowledge work that we may think:
My thesis was rooted in the idea that knowledge work’s worst threat is itself, the proliferation of David Graeber’s “bullshit jobs” that have the aesthetics of knowledge without the content.
This lack of content—the content that creates impact, not the content that generative AI can mimic at scale—is what drives the angst of the knowledge worker, so few of whom create anything from what they know, let alone learn new things that can be applied in new ways. The widgetization of knowledge work is the threat, and the perceived ability of AI to replace knowledge workers is an admonishment of the pseudo-productivity these roles reward. As I wrote last year, “AI can replace your job because your job right now is meaningless.”
When I read this essay now, over a year later, I am struck by how true it remains. The swift destruction of labor as we know it has failed to emerge.1 A new GPT model release feels like a new iPhone: more of the same. And as we continue to discover, much of what AI impressively does is actually humans.
I’m not so foolish as to believe that these tools won’t continue to improve. They will, though the limits of the physical world and infrastructure temper ambition. But I am a little surprised that a year on so little has changed.
The AI hype cycle has been well-documented, but it’s curious to me that generative AI remains a fairly niche2 product. To be fair, my friends are a fairly specific bunch, but I know only two who use generative AI tools regularly, and one of those is required to for work. My sole use case remains automated transcription, though during my job search I did try using AI tools, as was endlessly suggested to me, and found them entirely useless.
I keep waiting for the moment when utility outweighs novelty and I find some reason to overcome my disdain. I keep waiting, as someone with an inculcated fetish for productivity culture, to uncover the self-maximization that has been promised. Instead, I see my time being pushed to cognitively demanding tasks, strategic analysis, and human connection: the very things AI cannot do. There is nothing that I do for clients or myself that can be replaced by AI, because my work and my time have content. I do not move around information; I apply knowledge in novel settings with discernment.
AI does not threaten me, but I know it will continue to be used to threaten workers. It is a tool of control, and as such is the ideal load-bearing pillar of an economy that is increasingly more psychological than financial, ruled by a fantasy of rationality that privileges its believers. AI is a tool of this time, something we can project on, believe in, hope for even as it fails to do the bare minimum of what has been promised.
I’m left now with a sadness about the money spent, the data centers built, the abusive jobs created. Some part of me does believe in the promise of technology. I’m someone who, born earlier in the century of my birth, may not have survived birth at all, probably would not have survived childhood, and if I had, would probably have been institutionalized as a young adult. The technology and culture of my time have enabled me to live at all, to grow into a person, to find a deep contentment and joy in this human project. So some part of me wants to believe that tech can save us, that more and new and better will create the conditions for freedom and equitable resource-sharing and collective joy.
My formative years were spent at a school founded by a guy who wrote this about the purpose of education: “Goodness without knowledge is weak and feeble, yet knowledge without goodness is dangerous, and that both united form the noblest character, and lay the surest foundation of usefulness to mankind.”
And there’s the rub: knowledge, insofar as it continues to exist as something separate from information, is not inherently good, and the application of it is a moral activity. Many workers are neutered, then, unable to develop knowledge or goodness, restricted in their movements by the confines of a technocratic system built on a religious belief in rationality that leaves no room for personal moral or intellectual development. It is a system that so fears weakness that it traps goodness in spreadsheet calculations and access to a simulacrum of knowledge in language models, coldly removing the human element required for the development of a noble character.
AI cannot be good. It cannot wield knowledge with experience, and it cannot discern the ethics of application because it cannot think, and thus cannot truly be useful. Any utility it achieves will be due to the humans who leverage it, and like with any tool, the goodness of the humans will dictate the outcomes. But what kind of humans will be left if what we learn and what we make are funneled through the very tool we are meant to control?
Due in no small part to labor fighting back (some irony in having to link to the LA Times)
h/t