In the previous posts I described my PhD research into vibe-coding: how AI tools compress the journey from idea to prototype, where that process breaks down, and what non-technical creatives need to know to use these tools well. But I’ve been talking mostly in the abstract, referencing two projects in detail and gesturing at the broader picture. This post is the broader picture.
Over the past year or so, I’ve vibe-coded somewhere north of fifty projects. Not all of them are good. Not all of them are finished. But taken together they represent something I think is worth documenting: what happens when someone with more ideas than technical skill gets access to AI tools that can actually keep up with them.
The Backstory
I’ve always had this problem where I can imagine things faster than I can build them. Before AI coding tools, that meant most of my ideas died somewhere between “this would be amazing” and “I don’t know how to make React do that.” I knew enough to be dangerous but not enough to be productive. A decent generalist who could start everything and finish almost nothing.
GPT-4 changed that in early 2023, and then things accelerated further when I started using Claude. Suddenly the gap between describing what I wanted and having something functional shrank from weeks to hours. And instead of doing the sensible thing and focusing on one project at a time, I did what any reasonable person would do: I built everything I’d ever wanted to build, all at once.
The Laughter Saga
The best example of how vibe-coding changes what’s possible is what I’ve started calling the laughter saga. It began with a simple question: could you build a Try-Not-To-Laugh game that actually watches your face?
The first step was boring but necessary. I built a dataset pipeline that downloads and segments laughter audio clips from YouTube using AudioSet metadata, resampling everything to 16 kHz mono. Then I trained a ResNet18-based CNN on mel spectrograms to classify laughter versus not-laughter. It hit an F1 score of 76.2%, which isn’t going to win any competitions but is more than enough to know when someone cracks.
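For readers unfamiliar with the metric: F1 is the harmonic mean of precision and recall, which matters here because laugh moments are much rarer than silence, so raw accuracy would be misleading. A minimal sketch of the calculation, with invented predictions rather than the real model's output:

```python
def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for a binary classifier.
    Labels: 1 = laughter, 0 = not-laughter."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: 4 laugh clips, 4 non-laugh clips, two mistakes.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
print(round(f1_score(y_true, y_pred), 3))  # → 0.75
```

At 76.2% the classifier misses some laughs and raises some false alarms, but for a game that only needs to notice when you crack, that margin is workable.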
From there it spiralled. I exported the model to ONNX for browser inference. Built a Flask API with a web frontend for real-time microphone-to-prediction. Rewrote a 2019 sound event detection system to run on Apple Silicon at 60+ FPS. Each version taught me something the previous one hadn’t.
Then came the game iterations. Laughter3 was a Vue.js prototype combining face detection with video playback. Laughmachine got more ambitious: adaptive difficulty using contextual bandits to learn what makes each individual player crack, HP bars, segment-level heatmaps, all running at 15 Hz on the webcam with no frames leaving the device. The party version added Jackbox-style lobbies where players join via codes or QR, submit YouTube clips, and watch them together while the system monitors everyone’s face.
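The adaptive-difficulty idea is easier to see in a stripped-down form. The real system used contextual bandits conditioned on player state; the sketch below is a plain epsilon-greedy bandit with invented clip categories, which captures the core loop of explore, observe whether the player laughed, and exploit what works:

```python
import random

class EpsilonGreedyBandit:
    """Per-player bandit over clip categories: reward 1 if the player laughed."""
    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)        # explore
        return max(self.arms, key=self.values.get)   # exploit best so far

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean

bandit = EpsilonGreedyBandit(["puns", "fails", "animals"])
# Simulated session: this particular player mostly cracks at animal videos.
for _ in range(200):
    arm = bandit.select()
    reward = 1 if arm == "animals" and bandit.rng.random() < 0.8 else 0
    bandit.update(arm, reward)
print(max(bandit.values, key=bandit.values.get))  # most effective category so far
```

The contextual version swaps the per-arm running mean for a model over player features, but the select-observe-update loop is the same.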
PublicLaugh was the most polished iteration: a competitive platform with continuous Elo ratings for both players and video clips. A bandit algorithm selects clips intelligently, WebSocket sync keeps everyone in lockstep, and dual leaderboards track the most composed players alongside the most effective joke-makers.
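The dual Elo idea is the standard chess formula with one twist: every viewing is a match between a player and a clip. If the player holds their composure, they "win" and take rating from the clip; if they crack, the clip wins. A minimal sketch (the K-factor and ratings are illustrative, not PublicLaugh's actual constants):

```python
def expected(r_a, r_b):
    """Probability that A 'wins' under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_player, r_clip, player_survived, k=32):
    """One viewing: the player wins if they didn't laugh, the clip wins if they did."""
    e = expected(r_player, r_clip)
    score = 1.0 if player_survived else 0.0
    delta = k * (score - e)
    return r_player + delta, r_clip - delta

# A mid-rated player faces a notoriously effective clip and cracks:
player, clip = elo_update(1500, 1600, player_survived=False)
print(round(player, 1), round(clip, 1))  # → 1488.5 1611.5
```

Because it's zero-sum, the same update feeds both leaderboards: composed players climb one table while effective clips climb the other.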
And then TNTL 2.0 emerged as the grand unification of everything I’d learned. Survival modelling instead of binary pass/fail. Hazard functions and survival curves. A composure index that works like an Elo for poker faces. A hybrid recommender combining collaborative filtering with content embeddings from Whisper, CLIP, and YAMNet. Four game modes. A 560-line engineering spec.
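The move from binary pass/fail to survival modelling is less exotic than it sounds. In discrete time, the survival curve is just the running product of one minus the per-segment hazard, where the hazard is the probability of laughing in a segment given you've held out so far. A sketch with invented hazards:

```python
def survival_curve(hazards):
    """Discrete-time survival: S(t) = product over s <= t of (1 - h_s),
    where h_s is the chance of cracking in segment s given survival so far."""
    surv, curve = 1.0, []
    for h in hazards:
        surv *= 1 - h
        curve.append(surv)
    return curve

# Per-segment laugh hazards for one clip (invented numbers —
# the punchline lands in the third segment):
print([round(s, 3) for s in survival_curve([0.1, 0.3, 0.5])])  # → [0.9, 0.63, 0.315]
```

That shift is what makes the richer scoring possible: instead of asking "did you laugh?", the model asks "how long were you expected to last against this clip, and did you beat that?"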
Seven projects deep into the same idea, each one more sophisticated than the last. A year ago, the first version would have been the only version. I wouldn’t have had the technical endurance to iterate that many times. Vibe-coding didn’t just let me build the thing. It let me build the thing seven times, learning something new each time.
Games and Prediction Engines
The laughter projects weren’t the only rabbit hole. My League of Legends obsession produced a scraper for champion data, a draft assistant that combines counter-matchup and duo-synergy winrate data to recommend optimal picks, and a match prediction engine trained on 11 years of pro match data. The prediction model uses XGBoost with strictly pre-match rolling averages to prevent data leakage, which is the kind of methodological detail I wouldn’t have known to care about before this research taught me to question everything the AI produces.
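The leakage fix is simple to state: every rolling feature must be computed only from matches strictly before the one being predicted, so a match's own outcome never sneaks into its own features. A pure-Python sketch of the idea, with invented numbers:

```python
def pre_match_rolling_mean(values, window=5):
    """For each match, the mean of the *previous* `window` results only —
    the current match's outcome never leaks into its own feature."""
    feats = []
    for i in range(len(values)):
        history = values[max(0, i - window):i]  # strictly before match i
        feats.append(sum(history) / len(history) if history else None)
    return feats

# A team's kills per game across a split (invented numbers):
kills = [10, 14, 9, 20, 13]
print(pre_match_rolling_mean(kills, window=3))
# → [None, 10.0, 12.0, 11.0, 14.333...]: match 0 has no history,
#   and match 3's feature uses games 0-2 only, never game 3 itself.
```

Get that slicing off by one and the model looks brilliant in backtests and useless in production, which is exactly the kind of silent failure the AI will happily generate unless you know to check for it.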
I also built an Among Us analytics dashboard tracking 57 players across Sidemen gaming sessions with a custom Elo rating system, and a World of Warcraft addon for The Burning Crusade 20th Anniversary that analyses your raid composition and tells you who to recruit next. That one’s in Lua, which I’d never written before. The AI handled the language difference without breaking a sweat.
Then there’s Chain Reaction, which might be my favourite small project. It’s a semantic word-chain puzzle game powered by neural word embeddings. You get a start word and an end word with hidden steps between them, and you guess the intermediate words by reasoning about meaning. Geodesic interpolation in embedding space picks the path. Wordle meets word2vec. The whole thing runs on Ollama with nomic-embed-text and took an evening.
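"Geodesic interpolation" here means walking along the great circle between two unit embeddings rather than cutting a straight chord through the sphere, so every intermediate point is itself a valid unit embedding you can match words against. A minimal slerp sketch with toy 2-D vectors (real embeddings come from nomic-embed-text and have hundreds of dimensions):

```python
import math

def slerp(a, b, t):
    """Spherical linear interpolation between two unit vectors.
    Points along the geodesic stay on the unit sphere, unlike
    straight linear interpolation, which cuts inside it."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    theta = math.acos(dot)          # angle between the two embeddings
    if theta < 1e-8:                # vectors (nearly) identical
        return list(a)
    wa = math.sin((1 - t) * theta) / math.sin(theta)
    wb = math.sin(t * theta) / math.sin(theta)
    return [wa * x + wb * y for x, y in zip(a, b)]

# Toy orthogonal "embeddings" for a start word and an end word:
a, b = [1.0, 0.0], [0.0, 1.0]
mid = slerp(a, b, 0.5)
print([round(x, 4) for x in mid])  # → [0.7071, 0.7071], still on the unit circle
```

The game then asks: which vocabulary word's embedding is nearest to each intermediate point? Those nearest neighbours become the hidden steps of the chain.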
Fantasy Sports as Operations Research
The Fantasy Premier League work deserves its own mention because it represents the most technically ambitious thing I’ve built. The first version treats FPL as a Multi-Period Stochastic Knapsack Problem. Bottom-up ML models predict expected points, then Mixed-Integer Linear Programming optimises squad selection across a rolling horizon. It includes dynamic Fixture Difficulty Ratings, rotation matrix analysis for defensive pairs, and chip strategy planning.
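At toy scale, the core selection problem — maximise predicted points subject to a budget — can be brute-forced, which is a useful way to see what the MILP formulation is doing before the solver takes over. Every name, cost, and point value below is invented, and the real formulation layers on FPL's positional quotas and per-club limits:

```python
from itertools import combinations

# Hypothetical mini-pool: (name, cost in £m, predicted points)
pool = [
    ("GK cheap", 4.0, 3.1), ("GK premium", 5.5, 4.0),
    ("DEF a", 4.5, 3.5), ("DEF b", 6.0, 4.8),
    ("MID a", 8.0, 6.2), ("MID b", 12.5, 8.0),
    ("FWD a", 7.5, 5.9), ("FWD b", 11.0, 7.4),
]

def best_squad(pool, size, budget):
    """Exhaustive version of the knapsack the MILP solves at full scale."""
    best, best_pts = None, -1.0
    for squad in combinations(pool, size):
        cost = sum(p[1] for p in squad)
        pts = sum(p[2] for p in squad)
        if cost <= budget and pts > best_pts:
            best, best_pts = squad, pts
    return [p[0] for p in best], best_pts

names, pts = best_squad(pool, size=4, budget=30.0)
print(names, round(pts, 1))
```

Note that the optimum skips the highest-scoring midfielder: at £12.5m he crowds out too much quality elsewhere. That trade-off is the whole point of treating squad selection as a knapsack rather than picking the best players one by one — and the multi-period version repeats this reasoning across future gameweeks with transfer costs linking them.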
The second version goes further: a full reinforcement-learning agent with four training phases that layer up from gradient boosting baselines through LSTM and Graph Neural Networks to a PPO policy trained via Ray RLlib. The gym environment is written in Rust for speed.
And then, because apparently I can’t leave well enough alone, I built Fantasy Parliament. The same concept as Fantasy Premier League but for the UK Parliament. Build a cabinet of 15 MPs within a budget, assign them to ministerial positions, and earn points based on real parliamentary activity sourced from Hansard. Democracy as a spectator sport.
Creative Tools
Some projects were more directly creative. The autostereogram generator takes any 2D photo and uses neural depth estimation to produce a genuine Magic Eye image, optimised for Apple Silicon. Its more serious sibling converts any 2D photo or video into stereoscopic 3D for VR headsets or red-cyan anaglyphs, with a 30-40% quality improvement in the MVP alone.
The movie recap generator, Recaps, is a ten-step pipeline that takes a raw movie file and produces a YouTube-ready recap with AI narration, scene detection, character identification, and narrative arc analysis. Choose your narrator: YouTube critic, film noir, or comedy channel. Recaps is probably the project that best illustrates the compression gap I keep writing about. The pipeline works. It produces output. Whether that output is actually good enough to publish is a different question, and one the AI can’t answer for you.
The creative-router project tried something more architectural: a Mac-first AI orchestration system that converts natural language requests into typed, cache-aware directed acyclic graphs and executes them across local or cloud AI models. Describe “turn a podcast into a storybook” and it decomposes it into model calls, runs them via Prefect, and embeds provenance metadata in every output.
Community Platforms
Several projects emerged from my creative technology background in Brighton. Brighton Fuse reimagines the city’s creative and tech community hub as a virtual-first, AI-enabled platform. Wired Sussex is an AI-native tech community platform with a jobs board, commissions marketplace, and an AI careers advisor. Both are attempts to solve the same problem: how do you sustain creative community infrastructure when physical spaces keep closing?
Artspace, the consortium-builder I wrote about in an earlier post, came from this same impulse. So did the Storytelling Innovation platform, which provides a narrative AI environment for collaborative storytelling and research with a knowledge commons, project studio, and ethics centre.
The Weird Stuff
PosthumousWorld is an immersive scroll-based website exploring death, decomposition, and ecology through a 5-year art project. Six narrative sections, parallax effects, SVG mycelial network patterns, film-grain overlays, and a colour palette of soil-black, fungal-green, and bone-white. It’s beautiful and unsettling.
The dating recommendation system uses a two-tower neural architecture with dual embeddings, contrastive training, and Thompson Sampling for exploration. The maths of love, built in an evening.
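Thompson Sampling is the part that makes the exploration principled: each candidate keeps a Beta posterior over "this match goes well", and you recommend whoever wins a random draw from those posteriors — uncertain candidates occasionally win a draw and get their chance. A sketch with invented match histories:

```python
import random

def thompson_pick(stats, rng):
    """Thompson Sampling: draw once from each candidate's
    Beta(successes + 1, failures + 1) posterior, recommend the max draw."""
    draws = {c: rng.betavariate(s + 1, f + 1) for c, (s, f) in stats.items()}
    return max(draws, key=draws.get)

rng = random.Random(42)
# (matches that went well, matches that didn't) — invented history:
stats = {
    "profile_a": (8, 2),   # strong track record
    "profile_b": (1, 1),   # barely any data: wide posterior, gets explored
    "profile_c": (0, 9),   # consistently poor fit
}
picks = [thompson_pick(stats, rng) for _ in range(1000)]
print(picks.count("profile_a") > picks.count("profile_c"))
```

The well-matched profile dominates the recommendations, the poor fit almost vanishes, and the uncertain newcomer keeps getting sampled enough to find out which camp it belongs in — which is exactly the explore/exploit balance a cold-start recommender needs.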
The AR business card lets you scan a physical card with your phone and a 3D avatar of me appears, ready for a real-time voice conversation. One version runs on Google Gemini, the other on ElevenLabs. It’s a networking party trick that I’m unreasonably proud of.
And ChrisOS is a planned personal portfolio website disguised as a macOS desktop. Navigate projects through Finder windows, book meetings via a Calendar app, video-call a conversational AI version of me through FaceTime, and search everything with Spotlight. It’s the most “me” website imaginable, and the fact that I can even consider building something that ambitious tells you something about where these tools have brought us.
Research Tools
A few projects serve the PhD directly. The AIStudio Logger is a Chrome extension that captures high-fidelity trace logs of user sessions in Google AI Studio by intercepting fetch calls and polling Monaco Editor models every 3 seconds. Zero data leaves the browser. It’s how I plan to study what participants actually do during vibe-coding sessions.
ClaudeCreative is a Claude Code plugin implementing seven design interventions from my research: visible automation, minimal diffs, reflection pauses, conservative defaults, verification support, provenance preservation, and embedded literacy. It’s the research made tangible, a first attempt at the kind of scaffolding I described in the previous post.
What This All Means
Looking at this list, a few things stand out. First, the sheer range. Machine learning, game development, community platforms, creative tools, browser extensions, a WoW addon, arboricultural surveying software. A year ago I wouldn’t have attempted half of these because the technical barriers would have stopped me before I started.
Second, the iteration depth. The laughter saga went through seven versions. The FPL engine went through two complete architectural rewrites. Vibe-coding didn’t just let me build more things. It let me build things more times, which turns out to be where the real learning happens.
Third, and this connects directly to my research: every single one of these projects hit the compression gap I keep writing about. The AI would get me to a functional prototype fast, sometimes in a single session. But the distance from prototype to something I’d actually put in front of other people was always longer than expected, and always required understanding that no amount of prompting could replace. Configuration. Permissions. Data boundaries. Security. The invisible infrastructure that working software depends on.
I’m not listing all of this to show off. I’m listing it because this is what the evidence looks like when you study vibe-coding from the inside. These projects are my dataset. The patterns that emerge from building this many things this quickly are what inform the failure modes, the literacy framework, and the intervention tools I’m designing for my PhD.
If vibe-coding is going to be useful for non-technical creatives, and I believe it genuinely can be, we need to understand it through practice, not just theory. This is what that practice looks like: messy, prolific, occasionally brilliant, frequently broken, and always teaching you something you didn’t expect to learn.
I’m a PhD researcher at Royal Holloway, University of London, funded through the UKRI Centre for Doctoral Training in AI for Digital Media Inclusion. You can find me at chrischowen.com or reach me at contact@chrischowen.com.