I Watched AI Come for the Artists. Then I Saw It Coming for Me. So I Leaned In.

Part 6 of 6

I’ve been bookmarking tweets about AI since mid-2023. Over 450 of them now, spanning nearly three years. I recently let Claude analyse the whole collection, partly out of curiosity, partly because my PhD is about this exact subject and I wanted to see what patterns three years of instinctive bookmarking had left behind. What I found was a story I’d been living through without quite seeing the shape of it.

The earliest bookmarks are from a time when AI-generated content was a curiosity. Someone posts an AI-generated podcast. A game studio experiments with AI avatars. Interesting, but distant. Then through 2024 the bookmarks start to change. Artists begin losing work. An artist gets banned from the Sailor Moon subreddit because her 3D renders “reek of AI.” She has video proof of hundreds of hours of manual sculpting. Doesn’t matter. The ban stands and 128,000 people engage with her tweet about it. Ed Newton-Rex starts publishing the data: 58% of photographers have lost work to AI, 32% of illustrators have lost commissions, 86% of authors say AI has reduced their earnings. A tweet calling AI enthusiasts the best evidence that some people don’t have a soul gets 238,000 likes. A quarter of a million people hit the heart button on that.

I watched all of this happening and I recognised something. The complaints from artists, the displacement, the feeling of having your craft devalued by a machine that learned from your work without asking, all of that was heading straight for my profession too. I’m a creative professional. I work with technology. The same capabilities eating into illustration and photography and music were going to eat into the way I worked. That much was obvious from the trajectory.

The question was what to do about it.

Choosing to lean in

Rather than wait for it to arrive, I enrolled in a PhD to study it. And I started using the tools myself, heavily. Claude, Cursor, voice interfaces, the whole stack. I built research platforms through conversation with AI. I built prototypes for industry partners. I built teaching materials. Things I genuinely could not have built alone, or at least not in the time I had, and in some cases not at all given my skill set at the time.

That choice felt right and it still does. But it also felt complicated in ways I didn’t fully anticipate, and my bookmarks track that complication in real time.

In February 2025, Andrej Karpathy posted the tweet that gave all of this a name. “Vibe coding,” he called it. 33,000 likes. He described surrendering to the vibes, accepting all code suggestions without reading them, copy-pasting error messages with no comment, using a voice interface so he barely touched the keyboard. Software appearing from conversation. The term caught fire because the practice already existed. I’d been doing a version of it for months. Suddenly it had a label, and with the label came both legitimacy and a target.

Within weeks of that tweet, Claude gained MCP integrations with Blender, Figma, Unreal, and Unity: four major creative suites in the space of three days. A single prompt generated a working flight simulator. Sahil Lavingia offered VC money to vibe coders. The speed was staggering. And I was right there, riding it, building things faster than I ever had.

The concerns didn’t go away

Here’s the thing I keep circling back to. I chose to embrace vibe coding because I could see what resisting it had done to other creative fields. The artists who fought it lost income anyway. The photographers who refused to adapt watched their commissions disappear. Leaning in felt like the pragmatic move. And it was. I can build things now that would have been completely beyond me two years ago.

But the same concerns that animated the artists’ anger haven’t dissolved just because I’m benefiting from the tools instead of being displaced by them. Not yet, anyway.

A teacher posted that he couldn’t describe how much he hated what AI had done to grading. “I hate finding it, I hate the paranoia it fosters, I hate the confrontations.” That got a quarter of a million likes. The paranoia point lands hard. When anyone can generate convincing work through conversation with AI, trust erodes everywhere. It erodes in classrooms, in hiring, in creative communities. That Sailor Moon artist wasn’t using AI. She was accused anyway and punished for it.

And then there’s the psychological dimension that my bookmarks capture in ways I didn’t expect. Sam Altman, the CEO of OpenAI, described feeling “a little useless, and it was sad” when his own AI suggested better ideas than his. A developer named Mo Bitar posted a video called “I was a 10x engineer. Now I’m useless,” and two of the responses in my bookmarks get at something the productivity conversation avoids entirely. Steve Skojec compared it to cheat codes: amazing at first, then boring, and you can’t go back once you’ve seen it. Adam called the drug analogy apt: once you have the button, you can’t not press it.

I recognise those feelings. I have built things with AI that I’m proud of and I have also had the experience of knowing, quietly, that I couldn’t reproduce them without the tool. That’s a strange place to sit. It’s the same dependency the artists warned about, wearing different clothes.

Anthropic published research in January 2026 showing that coding with AI leads to decreased mastery. The company selling the tool, publishing evidence that using it dulls your edge. I use that tool every day.

What I think is coming

A year after his original tweet, Karpathy posted again. The tone had shifted. He described giving an AI agent a task in English and walking away for 30 minutes while it researched, coded, debugged, and deployed on its own. “Programming is becoming unrecognisable,” he wrote. The vibe coding I do now, the conversational back-and-forth with Claude, is already being overtaken by something more autonomous. The human moves from collaborator to supervisor.

Tim Sweeney flagged that Google would confiscate all profits from AI-generated content on YouTube. The tools democratise creation while the platforms capture the value. That pattern, where capability flows outward and money flows upward, was the artists’ complaint from the beginning. It hasn’t changed. It’s just reaching more people now.

My PhD is investigating what happens in this middle space. Not the hype and not the backlash, but what the practice of building software through conversation with AI actually looks like for people who come from creative fields rather than engineering. I’m running studies, building the tools participants use, trying to understand what kind of creative practice this is and what it costs.

I don’t have clean answers. I chose to lean in and I’d make that choice again, but I can’t pretend the concerns are behind me. The artists I watched get displaced weren’t wrong about what was happening to them. They were early. The question I sit with now is whether I’m the next wave or whether I’ve found a way to stay ahead of it, and whether there’s actually a difference.


Chris Chowen is a practice-based PhD researcher at Royal Holloway, University of London, part of the UKRI CDT in AI for Digital Media Inclusion.