Why AI Art & Media Is Useless
As someone who works in AI and genuinely believes in the value and power of LLMs to make professionals more useful and valuable, I can confidently say that I hate everything about AI image/video/music generation. It is useless and only serves one purpose: to replace creative professionals and the work they do.
The Scope of the Problem
To be a bit more precise about my hatred for AI media, I need to be clear that I don't hate AI. I've spent the last three-plus years devoting my entire professional life to leveraging AI tools to help professionals do their jobs more effectively. That said, from the moment I was exposed to AI art, I had the same initial reaction as most: a flood of anxiety and uncanniness, which I knew instantly I didn't like.
Leveraging an LLM to automate task creation from new emails I receive simply replaces something I spend 30 minutes doing every morning and allows it to occur in the background, producing the same output I would have arrived at. That's just helpful.
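The workflow above can be sketched as a simple pipeline. This is a hypothetical illustration, not the author's actual setup: the `extract_task` function stands in for an LLM call (with a trivial heuristic in its place so the structure is visible), and the `Task` shape is an assumption.

```python
# Hypothetical sketch of the email-to-task workflow described above.
# In a real setup, extract_task would call an LLM API with a prompt like
# "Read this email and produce a one-line task title"; here it is stubbed
# with a trivial heuristic so only the orchestration pattern is shown.

from dataclasses import dataclass


@dataclass
class Task:
    title: str
    source_email: str


def extract_task(email_body: str) -> Task:
    # Stand-in for the LLM call: derive a task title from the first line.
    first_line = email_body.strip().splitlines()[0]
    return Task(title=f"Follow up: {first_line}", source_email=email_body)


def triage_inbox(emails: list[str]) -> list[Task]:
    # The same morning triage a person would do by hand, run in the
    # background over every new email.
    return [extract_task(body) for body in emails]
```

The point of the pattern is that the output is the same task list the professional would have written themselves; the tool only moves the work into the background.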
I am not an artist, but if I decided to start using AI to create all the graphics for a client, I wouldn't be improving anything that I currently do. I would just be replacing a potential job for someone who does art professionally.
The fundamental difference here is when a professional uses AI to improve the efficiency or quality of something they already do, it functions as a tool. When someone with no art experience uses AI to create art, it's not improving anything. It's simply replacing something that already exists with something worse.
The "Democratization" Lie
Access to the ability to create art is not the same as having the ability to create art. The moment everyone began conflating the ability to produce an output with being a creator, they'd already swallowed the Kool-Aid. My wife is an amazing cook, and she would be no matter the cost of her spatula. However, if I purchased the greatest spatula in history, I would still be a crappy cook.
Now let's talk about vibe coding, which is fundamentally different from image generation. I have learned more about writing code and development in the past year by using AI than I ever have, because things frequently do not work and I have to go learn something new to fix them.
The key difference here is that vibe coding allows me to leverage my current knowledge as well as gain new knowledge, whereas generating an image simply produces an output that I have no ability to improve on. The reason I can't improve it is because simply going and looking up a bunch of information on how to create art will not make me a better artist.
The Collapse of Quality
Another important factor to understand here is that AI art isn't producing the worst work or the best work. It's producing the median of everything it has been trained on (actual artists' work). This is incredibly dangerous. It's essentially producing a blob of over-generalized consensus on what looks "good," and that doesn't work when you amalgamate every style and genre of art in order to produce something. This is not creative. This is aggregative.
Another problem is not that AI can't make art. Everything it makes is, by design, just good enough. Therefore, it hits dead center in the sweet spot for what massive corporations are looking for. Why pay a junior designer to iterate on multiple concepts when an AI can generate you 200 versions of something that are all "good enough"?
Creativity Is Disappearing
Creating art obviously requires creativity; using AI tools, however, simply requires knowledge. These are two very different things. Creativity isn't an output. It's an artist's struggle through years of honing their craft and improving their abilities. It's thousands of micro-decisions that aren't just learned, but practiced over many years.
This matters because creative work embodies meaning and emotion that come from the artist. When AI generates an image, it remixes thousands of tokens to approximate what the user requested. Crafting an advanced prompt is a legitimate skill (prompt engineering), but it's not the same as creating art. These skills should never be conflated.
What Should We Do?
First, we need laws to stop major AI labs—particularly OpenAI and Google AI—from collecting human-made art as training data for their image generation models. We need strong regulation requiring artists to opt in before their work can be collected and trained on.
Second, we need more AI leaders to step up and stop this before it's too late. For example, Anthropic (makers of Claude) has never released an image generation model. That doesn't mean Claude can't be used to create websites or other graphic design work, but creating a UI or navigation menu is entirely different from painting on canvas.
AI Art Hurts the Future Potential of AI
It's clear that most people don't find AI art pleasing—they actively dislike it. With every piece of AI art slop that lands on Twitter or Instagram, the long-term reputation of AI as a useful tool for professionals takes another hit.
For years now, public sentiment toward AI has been declining. There's one culprit: AI-generated art and media. People who aren't knowledgeable about AI don't distinguish between media generation and other use cases that are actually valuable. This reduces the chances they'll ever consider the benefits of AI as a tool.
It's my sincere hope that we stop this race to the bottom before we get there. We should take all the resources and effort put toward AI media generation and redirect them toward leveraging AI as a tool for medical breakthroughs, building technology, and conducting research more efficiently.