So far in this series I’ve talked about why I’m researching vibe-coding, what happened when I built two real systems through conversational AI development, and what non-technical creatives need to know to use these tools well. This final post is about where the research goes from here.
The short version: I’m moving from studying my own practice to studying other people’s, and from identifying problems to designing solutions.
From My Experience to Other People’s
The biggest limitation of what I’ve done so far is obvious: it’s all based on my own experience. I have some technical background, even if I’m not a professional developer. The failure modes I identified and the literacy framework I developed might underestimate what’s difficult for someone approaching these tools with no technical intuition at all. Or they might overestimate it. I genuinely don’t know yet.
That’s why the next phase involves two participant studies. The first will bring in 12 to 16 non-technical creative practitioners for their first extended encounter with vibe-coding. I want to see what happens when people who’ve never done this before sit down with an AI coding tool and try to build something. Where do they get stuck? What knowledge do they reach for that they don’t have? How do they calibrate their trust in what the AI produces? Do they experience the same failure modes I documented, or do entirely different patterns emerge?
The second study follows a smaller group, 6 to 8 people, over several weeks. I want to understand how capability develops (or doesn’t) with sustained practice. Do people build transferable skills, or do they become dependent on specific tools and prompts? Can they take what they learn in one project and apply it in a different context? This longitudinal view is important because vibe-coding’s value depends not just on what you can produce in a single session but on whether the experience builds lasting capability.
Both studies will test and refine the Minimum Viable Literacy framework I introduced in the previous post. The version I have now comes from my own practice. The version that emerges from watching other people engage with the same challenges will be more robust and more useful.
Building Better Tools
Understanding the problem is only half the research. The other half is doing something about it.
The intervention I’m designing is essentially a set of tools and scaffolds that sit on top of existing AI coding environments. Rather than building a whole new platform from scratch (which would be a PhD in itself), I’m working on something that modifies the experience of using those environments to make it safer and more reflective.
The concept includes a few interconnected pieces. First, structured prompting strategies that guide the AI to explain what it’s doing, pause at risky moments, and check assumptions rather than charging ahead with defaults that might be wrong. Second, scaffolds that surface the invisible stuff: when your code depends on infrastructure that hasn’t been set up, you should know about it before you waste an hour debugging the wrong thing. Third, reflection points that encourage practitioners to think about what they’re building rather than just accepting whatever the AI produces.
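To make the first of those pieces concrete, here’s the kind of session-opening instruction I have in mind. This is an illustrative sketch, not a finalised part of the intervention: “Before writing any code, list the assumptions you’re making about my environment and about any services this project depends on. Flag any step that could overwrite or delete existing work, and wait for my confirmation before taking it.” The exact wording matters less than the structure: it forces assumptions and risks into view before they cause problems, rather than after an hour of debugging.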
The idea isn’t to slow people down. It’s to make the process more transparent, so that understanding develops alongside the output. A tool that generates code without supporting comprehension might accelerate individual projects while creating the kind of dependency that erodes capability over time. A tool that makes its reasoning visible, highlights its assumptions, and prompts you to think at the moments that matter builds something more durable.
I’ll be testing early versions of these tools during my CDT industry challenges over the next couple of years. Each challenge is a chance to try a progressively more developed version of the scaffolding in a real context with real stakes. The first iteration will be simple: a handful of structured prompting strategies that I’ll use in my own workflow, documenting their effects as I go. Later iterations will be more sophisticated, incorporating what I learn from the participant studies.
The Bigger Question
Behind all the practical details is a question I keep returning to: when does AI assistance enable genuine creative empowerment, and when does it create fragile dependency?
My evidence so far suggests the answer isn’t inherent to the technology. It depends on how the tools are designed, how they’re taught, and how practitioners engage with them. People who question outputs, investigate failures, and build intuitions about reliability develop real, transferable capability. People who accept everything uncritically and lean entirely on AI judgement risk losing the very skills that would make them effective users.
The Minimum Viable Literacy framework is my attempt to get specific about what that difference looks like in practice. The intervention tools are my attempt to design environments that encourage the reflective, capability-building mode of engagement rather than the passive, dependency-creating one.
When This Will Be Done
I’m about 18 months into a four-year PhD programme. The participant studies will run through 2026 and into 2027. The intervention tools will be developed and tested iteratively alongside those studies. I’m targeting a conference paper based on the first study’s findings, and thesis submission is planned for August 2028.
I’ll update the blog as things progress. The research is very much live and evolving, and vibe-coding itself is changing rapidly as tools improve. One of the interesting methodological challenges of this work is that the thing I’m studying keeps moving under my feet. But that’s also what makes it worth studying: this is a practice that’s reshaping creative technology in real time, and the choices we make about how to design and teach it will matter for a long time.
If you’re a creative practitioner who’s been experimenting with AI coding tools, or if you run a small organisation that’s thought about using them, I’d love to hear from you. My research depends on connecting with people who are navigating this stuff outside of academic settings. You can reach me at contact@chrischowen.com.
I’m a PhD researcher at Royal Holloway, University of London, funded through the UKRI Centre for Doctoral Training in AI for Digital Media Inclusion, a collaboration between Royal Holloway and the University of Surrey.