The problem with what we refer to as “AI” today is that “artificial intelligence” is a terrible term for it that obfuscates what these tools actually do.

Most text-based tools are large language models (LLMs); these take a large body of sample text and try to, well, learn language from it: which words tend to follow one another, which words get used in which contexts, and so on. They reverse-engineer a rudimentary model of language from text samples. Similarly, generative image models draw on a large amount of image data to create their output. The data pool comprises “tagged” samples—data paired with human-written metadata describing context, style, formality, and so on. This is what allows an LLM to pick words of the appropriate formality when asked to, say, write a cover letter for you.
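To make the “what words go in front of one another” idea concrete, here is a toy sketch of pure next-word statistics: a bigram Markov chain. This is a drastic simplification, not how real LLMs actually work, and the tiny corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus (invented for this sketch).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words have been seen following which: the entire "model"
# is just a table of observed next-word options.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly picking an observed next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        choices = follows.get(word)
        if not choices:
            break  # dead end: this word was never followed by anything
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Every word this emits already appeared in the corpus, in an arrangement it has already seen locally; nothing new is ever produced, which is the point the section goes on to make about recombination versus invention.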

There are some notable problems with the way this works compared to how the average layperson thinks it works.

There are No New Ideas in the Data Lake

The reason I dislike calling these programs “Artificial Intelligence” is that there is no intelligence behind them. Some kinds of machine learning can do really impressive and useful things, like learning to recognize cancerous moles, but when it comes to generating text, the machine isn’t “thinking” the way you’d expect an intelligent being to. It’s merely recombining words and ideas it has already seen into slightly different forms.

This kind of thing is useful for predictive text, obviously—it makes typing on a cell phone faster—but it means the model is absolutely dogshit at coming up with new ideas. In the worst case, it plagiarizes something very obvious; even in the “best” case, it’s not putting intention into the text. It doesn’t understand pathos, or know how to represent a character with real human emotion; it can only mime at those things. It can’t coherently lace themes into a text because it’s not approaching the work holistically, just asking what “should” go next, and it doesn’t come up with anything outside the box it’s been presented. Details are frequently changed or forgotten over the course of a text.

The result is odd, meandering SEO articles about nothing, written to get you to click a website, and fiction with no point and no scaffolding under the paper-thin surface. A thousand monkeys at a thousand keyboards may eventually produce Shakespeare by accident with about the same likelihood as ChatGPT, but that doesn’t make the process intelligent or efficient.

Our current era is driven by a huge amount of consumer demand. The average person probably reads more books in a year than someone a century or two ago read in a lifetime, and because companies must keep making the number go up to appease the shareholders, they have to keep finding new things to sell people. “AI” is tantalizing because it makes the creation of “content” faster and lower-effort, for sure, when you need more grist to feed through the mill. But I think anyone who believes ChatGPT could make them the next great American novelist, or a Nobel laureate in literature, is fooling themselves.

Some people looking for a very specific type of tropey fix might be sold on it, but I don’t think the result will be memorable or interesting. People don’t just read to see a story unfold—they read because it holds a mirror up to their experiences, because it makes them think about something in a new way, because they want to see an expert discuss something, because they want to feel something visceral alongside the characters. “AI” can do some of those things by accident, but ultimately, it’ll produce things that are forgettable and derivative because of what it has to draw on.

Triangulation is Not Knowledge

Everything Has a Cost

All of the above arguments are well and good, but the one I tend to lean on these days is a little different. If you’re not convinced, leave aside the value or ethics of anything produced by GPT-4-style software and consider what it costs to produce at all.

Large models like GPT-4 and its cousins use a huge amount of energy because they run on large banks of heavy-usage computers that need to be both powered and cooled to keep them from overheating. By some estimates, a single GPT-4 query uses the equivalent of half a bottle to a full bottle of water. Think about how many people are making queries every day, and how many of those queries actually… need to happen.
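To get a rough sense of scale, here’s some back-of-the-envelope arithmetic. Both inputs are assumptions: the per-query water figure repeats the estimate above, and the daily query count is purely illustrative, not a measured number.

```python
# Back-of-the-envelope estimate. Both inputs are assumptions, not data.
litres_per_query = 0.5         # roughly one small bottle of water (claimed above)
queries_per_day = 100_000_000  # hypothetical daily query volume, for illustration

litres_per_day = litres_per_query * queries_per_day

# A 50 m Olympic swimming pool holds about 2.5 million litres.
olympic_pool_litres = 2_500_000

print(f"{litres_per_day / olympic_pool_litres:.0f} Olympic pools per day")
```

Even with made-up volume numbers, the multiplication is the point: a tiny per-query cost times an enormous query count adds up fast.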

The newer DeepSeek model from China does have a significantly lower energy cost. Still, I’m deeply wary of using any of these tools for non-essential reasons, considering that we’re in the midst of a climate crisis and at a pivotal moment where we can still influence how bad it gets. Global warming is already driving untold species to extinction, destroying homes, and killing people through natural disasters. If my not using “AI” will prevent even a fraction of that, I think it’s worth it.