For most of my career, I’ve worked in the gap between creative people and technology. As a creative technologist, my job has been to help artists, designers, and organisations turn ideas into interactive digital things. It’s rewarding work, but it comes with a persistent frustration: the people with the most interesting ideas are often the ones least able to build them. The cost and complexity of software development mean that creative visions regularly die in the space between imagination and implementation.
Then, sometime around 2024, something shifted. Large language models got good enough that you could describe what you wanted in plain English and get working code back. Not perfect code. Not production-ready code. But functional, testable, surprisingly capable code. The barrier between “I have an idea” and “I have a prototype” started to collapse in ways I hadn’t expected to see in my lifetime.
What is Vibe-Coding?
In February 2025, Andrej Karpathy gave this practice a name. He described a way of programming where you “fully give in to the vibes” and “forget that the code even exists.” You describe what you want, the AI writes the code, you run it, see what happens, and iterate from there. He called it vibe-coding.
The name stuck because it captured something people were already experiencing. You don’t need to understand the code the AI writes. You don’t need to know what React is or how a database works. You just need to be able to describe what you want clearly enough for the AI to have a go at building it.
For non-technical creatives, this is a big deal. A designer who can imagine an interactive installation but can’t code it can now describe it and watch it take shape. A small arts organisation that needs a custom tool but can’t afford developer time can attempt to build one themselves. A musician who envisions a novel performance interface can prototype it through conversation rather than learning to program from scratch.
How I Fell Into This
Three months into my doctoral programme at Royal Holloway, University of London, I sat in Google AI Studio describing features for an educational chatbot. It was for an industry challenge set by my Centre for Doctoral Training (CDT), a collaboration between our research centre and an education partner working with children who have special educational needs. Over a few intensive sessions, I watched a functional system take shape: authentication, themed AI characters, text-to-speech, even experimental voice interaction. I had written almost none of the code myself.
This wasn’t how I’d imagined starting a PhD. I’d enrolled on the UKRI Centre for Doctoral Training in AI for Digital Media Inclusion expecting to study AI-assisted creative tools from a scholarly distance. Instead, I found myself doing the thing I meant to study. The experience was disorienting and exciting in equal measure. Features that would have taken me weeks appeared in hours. Technical barriers I’d assumed were permanent dissolved into dialogue.
But it also raised questions I couldn’t ignore. The chatbot worked impressively in a demo. Could I actually deploy it for real children in real classrooms? I genuinely wasn’t sure. There were security considerations, content safety questions, accessibility requirements, and infrastructure decisions that no amount of chatting with an AI had addressed. I’d crossed one barrier only to discover a whole set of new ones I hadn’t anticipated.
Why This Matters Beyond the Hype
The optimistic story writes itself. AI democratises development. Anyone can build software. The gap between idea and implementation disappears. But as with most technology democratisation claims, the reality is more interesting and more complicated than the headline.
My research is about understanding that reality. Specifically, I’m asking: what do non-technical creative people actually need to know to use these tools well? Not to become programmers, but to build things that are safe, functional, and genuinely useful rather than just impressive-looking demos that fall apart under real use.
There’s a pattern I keep seeing that I’ve started calling the compression gap. AI tools compress the journey from nothing to working prototype dramatically. Things that would take days or weeks in traditional development reach functional states in hours. But the final distance from “working prototype” to “something you could responsibly put in front of real people” still demands knowledge that no amount of prompting can replace. Security review. Accessibility. Edge case handling. Deployment configuration. The journey to a convincing demo is fast. The journey to something you’d actually deploy is not.
This matters because if we don’t understand where these tools genuinely help and where they create a false sense of capability, we risk two bad outcomes. Either people build things that look great but are fragile, insecure, or unreliable. Or they hit the wall between demo and deployment, get discouraged, and conclude the tools don’t work, when in fact the tools work fine; they just need to be used with the right kind of understanding.
What I’m Researching
My PhD sits at the intersection of creative technology, human-computer interaction, and AI for digital inclusion. The full title is Vibe-Coding as a Creative Design Medium: Human-AI Co-Creation for Non-Technical Creatives, which is the most academic way of saying “I’m trying to figure out how to help creative people build things with AI without everything going wrong.”
The research has a few strands. I’m building real things with real partners through conversational AI development, documenting what works and what doesn’t. I’m studying how other non-technical creatives experience vibe-coding for the first time and how their skills develop over time. And I’m designing tools and approaches that could make the whole process safer and more reflective.
In the next post, I’ll tell the story of two projects I built as part of this research, an educational AI assistant and a full-stack arts sector platform, and what happened when I deliberately tried to build them without writing any code by hand.
I’m a PhD researcher at Royal Holloway, University of London, funded through the UKRI Centre for Doctoral Training in AI for Digital Media Inclusion. You can find me at chrischowen.com or reach me at contact@chrischowen.com.