The Case Against Generative AI

Generative AI needs no preamble: if you made your way to this blog post, you are familiar with AI chatbots such as ChatGPT and DeepSeek, and you have almost certainly consumed AI-generated content in the form of images and videos. You have also been forced to form an opinion on it, as it has permeated every facet of human life, from Facebook memes to critical systems development. Some view it as the next step in consciousness: once the technology is mature enough and the singularity is reached, it could render humans redundant, as it can potentially do anything we can, only better and faster. These acolytes are a minority, but a vocal one, and therefore important to note (from personal experience, they are also typically the people developing these systems and therefore the ones who stand to gain the most from the hype).

Most people, however, take a more moderate view: although they do not fully believe the promises their tech overlords are selling them, they acknowledge that the technology is indeed impressive and have incorporated it into their daily lives. This is, on its face, a completely reasonable response: after all, we give up so much of our daily lives to the demands imposed by work that little time is left to actually enjoy ourselves. With wages not keeping pace with the cost of living, a brutal economy of time is created which forces people to give up more and more of their lives for less and less gain. Under such conditions, it is not surprising that people en masse have uncritically incorporated AI into their lives, since it can greatly increase your productivity while also reducing the cognitive strain of your work. On top of all of this, many models can also generate creative works such as images or videos of at least passable quality that previously would have had to be commissioned from a human artist. The cherry on top is that many of these models are free to use, at least to some extent. Looking only at these positives, this technology could be seen as an incredible boon to the average worker, as it allows them to flip the economics of time in their favour.

There is, however, no free lunch. In fact, in this blog post, I will argue that the lunch offered by Generative AI is so costly that no one should eat it. I will not discuss in any way the quality of these models (I do not use them personally, but most people I know that use them seem to be reasonably satisfied with how they work). Instead, I will focus on the corrosive aspects that they have both on their user base as well as society at large. Although all of these points are valid regardless of how one chooses to use Gen AI, I will write this post from the perspective of a programmer.

Plagiarism

AI models work by collecting mountains of data and feeding it into The Machine. The Machine then spits out things that have similar patterns to the data it consumed. When feeding The Machine, its acolytes do not compensate or even acknowledge the people who produced this data. This is plagiarism. When someone uses an AI model, for whatever purpose, they are committing plagiarism, albeit in a rather indirect form. This is my main point of contention: even if the remaining issues listed in this post were solved, nothing would ever change the fact that these models are, when stripped to their very core, industrial-scale plagiarism generators.

If the data were not stolen but rather purchased and its creators duly compensated, I would be more inclined to use Gen AI, or at least to tolerate how ubiquitous it is. This is obviously a pipe dream: none of these AI companies are profitable at the moment, and if they actually had to meaningfully pay for the data they use, their business model would immediately collapse. Any time I see AI-generated content, I am acutely aware that it is the creation of a machine powered by the ghosts of dead dreams.

The most common argument against this line of reasoning is that everyone who creates things for a living, whether they be engineers or artists, also takes inspiration from the creations of others. No one expects them to compensate everyone that inspired them so there is no reason why this standard should be applied to Gen AI.

This argument builds off the faulty premise that Gen AI is in any way similar to human cognition. It is easy to fall prey to this fallacy, since the content it produces can be eerily similar to what a human is capable of. However, it is important to remember that AI is not "inspired" by the content it consumes; it instead uses that content to build a statistical model, which it then uses to infer the most appropriate output for any given input. This differs from the creative process as we experience it, since these models are rigidly bound by the content they have consumed. Humans, on the other hand, can take their lived experiences and use them to build something that no one has ever seen before.
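To make the "statistical model" point concrete, here is a deliberately tiny sketch: a bigram model that only ever predicts the word it most often saw following the current one. Real LLMs are vastly more sophisticated than this toy, but the illustration holds — every word the model can emit, and every pattern it can follow, comes from the data it was fed.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count how often each word follows another in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(model, start, length=5):
    """Greedily pick the statistically most likely next word each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # the model has literally nothing to say here
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigram_model("the monster fled the lab and the monster wept")
print(generate(model, "the"))  # only words and patterns from the corpus
```

However large you make the corpus, the generator can never step outside it; it can only recombine what it ingested, which is the whole point.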

In the early 1800s, Mary Shelley wrote her first novel, Frankenstein. It would go on to be recognised as one of the most groundbreaking novels ever written, effectively creating the science fiction genre. It established narrative conventions (such as the mad scientist trope) that would go on to be used in a variety of different SF stories, books, movies and TV shows. It was as influential as it was because, on top of being exceptionally well written, Shelley created something that did not fit neatly into any of the pre-existing genres of her era (although at the time it was considered a Gothic horror). If you fed all the content available to Shelley into a Gen AI model and asked it to create a Gothic horror novel, it would never produce anything as innovative as Frankenstein. It might spit out something that is pleasant to read, but it will never be able to see past the perspective of the people whose creations it stole.

All this considered, by using these models, we are effectively accepting that the works stolen by the AI cartel are literally worthless: they can and should be taken without permission, extracted for profit and not even acknowledged. And if we accept that, what fresh horrors will the tech industry force us to accept in a few years time?

Cognitive lock-in

For those who work mostly on the computer, one of the most satisfying experiences is watching a Vi or Emacs wizard at work. Seeing someone rattle off keybindings to move their cursor to the exact right location and write the perfect piece of code that makes a given error go away is magical. Even now that I have gotten decently good at Emacs, my respect only deepens when I see someone show off their workflow in these editors. In displaying their proficiency, these users also display the effort they put in to get where they are. It's fun, impressive and a little humbling.

Watching someone use AI chats gives me the exact opposite feeling: I am not watching a professional who has tamed their tool of choice, I am simply seeing someone who has figured out what to feed the plagiarism factory to get the output they need. No proficiency, no expertise, only vibes. The obvious response to this is "Who cares if using AI doesn't look cool? As long as it gets the job done quickly, the end result is the same." This argument falls apart, however, when you acknowledge that using AI to generate something means alienating yourself from the creative process.

There are times when this is an acceptable trade-off. When I started to become aware of just how endemic ChatGPT was, I asked my colleagues if they used it and what the hell they were putting in that little box. One of them said that they had used ChatGPT once to "write a python script to create an excel spreadsheet", which is a task no one should ever have to do. When a task is boring, annoying and bureaucratic, there is an argument to be made that we should allow people to use AI, since their time is better spent elsewhere.
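For what it's worth, my colleague's actual script is lost to history, so what follows is a hypothetical stand-in I wrote myself: it uses only the standard library's csv module to produce a file Excel opens directly (the real script presumably used a dedicated spreadsheet library, but the point stands — the task is small enough that no machine ghost-writer is required).

```python
import csv
import io

def rows_to_csv(rows, header):
    """Render rows as CSV text, which Excel imports without complaint."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)   # column names first
    writer.writerows(rows)    # then the data rows
    return buf.getvalue()

# Hypothetical data, purely for illustration.
report = rows_to_csv([("widgets", 42), ("gadgets", 7)], header=("item", "count"))
print(report)
```

Writing this by hand takes about as long as coaxing a chatbot into doing it, with the added benefit that you still know how afterwards.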

Naturally, this comes with a cost. Each time you delegate a task to an AI, you reduce your own ability to create something similar unassisted. This means your work could take a hit if these models are ever updated in such a way that they become worse at whatever it is you rely on them to do. Even worse, the business model that many AI companies employ may not be viable in the long run. None of these companies are profitable, as far as I'm aware, and it is very possible that in a few years' time we will see prices rise for the use of these assistants. Worse still, these companies might have to be bailed out by the government and their costs socialised, as they have reached the same status as the banking and housing industries: Too Big to Fail. Since so many people are already so reliant on these AI services to do their work, we will simply be forced to fold and pay whatever toll is expected of us, regardless of whether or not we actually use these assistants.

Vendor lock-in is always an issue whenever you have to use any kind of proprietary software, but with AI it is even more egregious. We are no longer surrendering just our data to the whims of the tech industry, but our cognitive abilities. I started this section talking about Vi and Emacs because I wanted to contrast the kinds of thinking these tools encourage. Both bill themselves, at least in part, as productivity-enhancing software, but one does so by rewarding work, experimentation and discipline, while the other rewards mindlessly plugging prompts into a sterile box until you get what you are looking for.

Although I do recognise that it is possible to use this technology in a way that minimises cognitive atrophy, I do not believe that the consumer public should be trusted with this responsibility.

Data privacy

The fight for data privacy is a losing one; even a decade after Edward Snowden's bombshell revelation of a global surveillance network controlled by the US, I don't think this is an issue that anyone cares about. We have accepted that the price for using digital spaces is to surrender all of our personal information to basically anyone that wants it, from advertisers, to AI companies, to the American government. People care about this so little that they are willing to pay to surrender their own personal data to tech companies. The most egregious example is ancestry sites that charge a fee for you to surrender your DNA so that they can tell you your level of racial purity (as an aside, please do not give your DNA to any government or corporation without a warrant).

Data privacy is a concept that is too abstract for people to really wrap their heads around. Even if people intellectually understand the problems with surrendering your personal information to private entities (and I think most people do), it is almost impossible for this knowledge to affect their consumer choices, since the consequences are so far removed from their actions.

Because of this, I won't harp on this subject for too long; I will only say that it is very worrying to me that so many people put personal and professional information into AI assistants almost uncritically. Not to mention that whenever you have a conversation with one of these services, you are giving away free data that these companies can use to train their models. If you are willing to accept that this is the price of using this technology, so be it. Just be aware that every time you put something into an AI, you are directly helping a company that would gladly sell your organs if it meant they could turn a profit.

A Demonic Inversion

I have written about how personal assistants are changing the way people work, but there is another facet of Gen AI that I have not discussed: content creation. Content creation is a term I have never been very fond of, as it flattens all artistic expression into a single homogeneous soup. Also, it sounds like a euphemism for taking a shit. Nevertheless, it seems applicable when describing what others call AI "art".

In the latter half of the 20th century, a common anxiety was the role that automation would play in humanity's future. While many were worried about the possibility of some kind of purge of humans from the workforce, many also began imagining a world where people are free from the drudgery of manual labour: a world where the exploitation necessary to maintain our lifestyles is offloaded to autonomous systems and humans can devote themselves to creative endeavours.

As the years went on, we quickly realised that these seismic changes would not come to pass. Although our relationship with work changed, automation did not play as big of a role as many had expected. Eventually, as technological progress mostly plateaued, we came to terms with the fact that technology was not going to release us from our jobs.

In the last few years, with the rise of Gen AI, the threat of automation looms large once again. The difference now is that the tech sector is not promising to automate menial labour but rather to replace artists, writers, directors, actors and other forms of creative expression. There has already been some success in doing this: many music acts are using AI for their album covers as well as employing AI vocals in their music, most notably Drake, who used AI-generated vocals of Snoop Dogg and the late Tupac Shakur in one of his Kendrick disses last year. Some movies have tried to incorporate fully artificial actors, most recently Ian Holm's "cameo" in Alien: Romulus and Christopher Reeve in the recent (hideous, awful, evil) Flash movie. In the world of book publishing, the introduction of AI means that pretty much anyone can "write" a book, which has created a deluge of worthless content directed at publishers and creative writing competitions.

I have already touched on how AI can never replicate human innovation and creativity, but even if it could, there is no reason why anyone should accept this. The Utopian future that the Hollywood Sickos and the AI acolytes are promising is one where you will not have to creatively exert yourself in any way, as that work will all be done by machines. Instead, we get to spend our lives maintaining these systems, come home, plug a prompt into the Slop Generator and consume whatever it spits out. The promise of automation has been completely inverted: humans will be slaves to the machine.

I could critique this imagined future at length, but I don't believe that all of human artistic expression can ever be relegated to AI. Nevertheless, the Sickos are clearly frothing at the mouth at the idea of replacing pesky artists with machines that don't talk back and, crucially, don't try to unionise. Although the maximalist vision espoused by many in the entertainment industry won't come to pass, AI will still undoubtedly be used in ways that undermine artists. The most obvious (and, I fear, permanent) change is that many people now turn to AI slop generators whenever they need some very specific drawing instead of commissioning the piece from a human artist, whether it be for a book cover, advertising, etc. This leads not only to fewer artists and worse art, but also to more slop flooding the web.

Honorable Mention: Hallucinations

Sometimes AI models generate things that do not make sense. I don't really care about this. In fact, I hope they generate even more hallucinations so that people stop using these AI models.

Conclusion

In conclusion, Generative AI is a land of contrasts. Some might say it is an evil technology powered by mass theft which is making us stupider, less creative and ever more beholden to tech companies. Others might say that it can generate many useful things such as excel spreadsheets and child pornography. Who's to say who is right?
