“The singularity” is a well-worn concept in sci-fi. Wikipedia introduces it as follows:

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis […] an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

This is your classic HAL 9000 or Skynet scenario, or the precursor to the events of Dune, where the only computers left are humans called “mentats”. A computer gets so smart it figures out how to take over all the other computers; bad things happen. It’s a good story.

The thing is, though: what if the really scary thing isn’t an AI that takes over everything, but one that is exactly stupid enough to happily take over the boring office jobs for the rest of time?

Office jobs, at the most abstract level, are what Bret Victor calls the “small rectangle based knowledge economy”: human workers take in information from one set of small rectangles (paper or screen) and convert it into other small rectangles. This chain of documents and people — a bureaucracy — is the foundation of modern life in most countries. Max Weber summarised this as far back as 1921:

A mature bureaucracy is an almost indestructible social structure. […] Once an administration is fully bureaucratized, a virtually permanent structure of domination ties is created and the individual [civil servant] cannot escape the apparatus in which he is situated.

David Graeber’s instant classic Bullshit Jobs documents the impact of this today. In an interview, he described bullshit jobs as follows:

Bad jobs are bad because they’re hard or they have terrible conditions or the pay sucks, but often these jobs are very useful. In fact, in our society, often the more useful the work is, the less they pay you. Whereas bullshit jobs are often highly respected and pay well but are completely pointless, and the people doing them know this.

[…]

A lot of bullshit jobs are just manufactured middle-management positions with no real utility in the world, but they exist anyway in order to justify the careers of the people performing them. But if they went away tomorrow, it would make no difference at all.

Pointless and wasteful jobs that are nevertheless seen as highly socially important by a global capitalist bureaucracy, and that exist to justify the existence of the system itself? Is this sounding familiar? Let’s have a look at what OpenAI are actually trying to build:

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. (https://openai.com/charter, emphasis added)

Is it just me, or does this sound like OpenAI’s real economic goal is to build a machine that can do bullshit jobs?


I think this is already well under way. I’m struck that, after being told for so long that AI is the next big thing, there is still no use case for it that actually needs to exist. An MIT Technology Review article tells us “Hundreds of AI tools have been built to catch covid. None of them helped.”, but urges us to carry on trying anyway. In my personal life, the uses people tell me about for ChatGPT are all things that shouldn’t have to exist in the first place: making letters to your landlord sound more formal so you don’t get evicted, teetering on a plagiarism knife-edge for university essays people are only writing so they can get a job anyway, generating plastic-looking AI art for a laugh.

We are currently living in what many institutions are referring to as “The Polycrisis”, a state of complex-systems collapse where the things that make the world tick are coming undone. We’ve got really used to things simply not working properly, to pieces of the machine no longer connecting.

This week I was stuck trying to order a repeat prescription, bounced between my GP’s automated phone line, the NHS app, my pharmacy’s website, and various automated email services. Eventually, of course, the only way to fix it was to talk to a human at my surgery. I think most of us hit something like this several times a week. Anything we do seems to involve five different services, each with its own terms of service, privacy policy, and tens or hundreds of subprocessors we click “accept” on because we just need medicine, or food, or access to information.

Each of these pieces individually is a prime target for AI replacement. Each individual cog in the (broken) machine will probably make an easy case for replacing itself with an AI: as long as it can look like it’s doing the job, people will continue to be paid to maintain it, and whole-system reform is a long-term, difficult task that no one has the stomach for. As Richard Cook’s classic piece reminds us, “catastrophe requires multiple failures – single point failures are not enough”. And what describes the current state of the world, the polycrisis, better than catastrophe?


I like to think we are approaching the epoch where information on a screen, from someone you don’t trust, is seen as complete junk, not worth the pixels it’s rendered on. As AI output gets fed back in as AI input, the signal-to-noise ratio of the whole thing will reach unworkable levels for human activity. There won’t be a single point where this can be identified and fixed, because AI models are inscrutable by the very nature of how they work. Again — this is already happening. I think it’s just taking people a while to realise where it’s headed: not towards a new global superintelligence that wants to kill us all, but towards the possible collapse of the human capitalist death machine that is already killing us all.