I’m a PhD researcher studying how AI tools can help non-technical creatives build real software. My programme is literally called the UKRI Centre for Doctoral Training in AI for Digital Media Inclusion. The mission statement says we’re being trained to “lead the transformation to responsible, AI-enabled inclusive media.”
And yet, neither of the universities hosting my CDT has published clear guidance on whether I can use AI in my own doctoral work.
That’s the contradiction I’ve been sitting with for months. I’m funded to study AI. I’m expected to become an AI expert. My industry partners want me to show them how AI can transform their practice. But the institutional framework governing my actual PhD has essentially nothing to say about whether, how, or to what extent I should be using these tools in the research itself.
I’ve started calling it the “study it but don’t use it” paradox. And I think it matters well beyond my own situation.
The Policy Landscape Is a Mess
I spent a significant amount of time mapping the state of UK university AI policies for doctoral researchers. The picture is not encouraging.
The Russell Group published its principles on AI in education back in July 2023. The words “doctoral”, “thesis”, and “postgraduate research” appear nowhere in them. The whole document is framed around taught assessment, essentially an attempt to stop undergraduates using ChatGPT to write essays. Understandable in mid-2023, but we’re now two and a half years on and the gap still hasn’t been addressed.
UKRI, the body that actually funds CDT programmes, has an AI policy that covers grant applications. Not the research itself. The QAA’s doctoral standards document predates generative AI entirely.
At the institutional level, the variation is wild. King’s College London has detailed guidance that explicitly covers doctoral theses and prohibits examiners from running AI detection on submitted work. The University of Leeds built a traffic-light system. The University of Glasgow said, remarkably clearly, that their guidance “should not impede research which specifically encompasses AI as a subject, tool, or method.”
Meanwhile, my own two institutions? Nothing dedicated to postgraduate researcher (PGR) AI use that I could find publicly. The guidance that does exist is aimed at undergraduate assessment.
The result is that AI use in doctoral research is governed almost entirely by supervisory discretion. Two researchers in the same CDT, studying related topics, could get completely different guidance depending on which supervisors they were assigned. That’s not a policy. That’s a lottery.
Why This Matters More for AI CDT Researchers
Every doctoral researcher in the UK is affected by this policy vacuum. But for those of us in AI-focused CDTs, the contradiction is especially sharp.
Here’s the thing about CDTs: they’re designed to attract people from industry. That’s the whole point. You bring in practitioners with real-world experience and give them the space to do deep research. I spent seven years running an innovation hub at Wired Sussex before starting my PhD. Many of my CDT peers came from similar professional backgrounds.
But industry has moved on. The creative and technology sectors we came from are overwhelmingly shifting to AI-assisted workflows. When I was working in Brighton’s digital creative community, AI tools weren’t controversial. They were just how you got things done. When practitioners like us enter the academy, we hit a culture that hasn’t caught up with the professional reality we just left. CDTs recruit from industry because they value industry experience, but the institutional framework hasn’t adapted to accommodate the AI-integrated practices that now define that experience.
And then there’s the dogfooding comparison. Anthropic reports that roughly 90% of the code for Claude Code was written by Claude Code. Windsurf claims about 95% AI-generated code. In the tech industry, using your own tools isn’t just acceptable; it’s considered essential. Nobody would take an AI company seriously if it refused to use its own products. In academia, we’re studying those same tools while navigating institutional cultures whose attitudes toward using them range from indifference to active suspicion.
The Epistemological Problem
For practice-based researchers like me, this isn’t just an administrative annoyance. It’s an epistemological problem.
My thesis investigates vibe-coding as a creative design medium. To produce credible practice-based knowledge about AI-assisted creative work, I need to actually do AI-assisted creative work. That’s what practice-based research means. The musician studying improvisation isn’t told to stop improvising. The designer studying participatory design isn’t barred from participating. But the researcher studying AI-assisted creative programming is somehow supposed to produce a thesis without leaning on AI?
The theoretical framing backs this up. Lucy Suchman’s work on situated action shows that human activity is fundamentally context-dependent and can’t be fully captured by pre-specified rules. University AI policies are trying to do exactly what Suchman says doesn’t work: create abstract plans that govern an enormous range of situated practices. Edwin Hutchins’ distributed cognition framework goes further, showing that a person working with a tool forms a cognitive system with properties that neither component has alone. A researcher working with AI isn’t a researcher plus a cheating device. They’re a different kind of cognitive system, one that can produce work neither human nor AI could manage independently.
The Inclusion Angle
There’s a dimension to this that I think is underexplored, and it connects directly to my CDT’s focus on inclusion.
I came to my PhD from a creative technology background, not from academic writing. The conventions of academic prose, the specific register, the citation practices, the structural expectations of a doctoral thesis: all of this was unfamiliar territory. And I know I’m not alone. The creative industries are full of people with deep practical expertise in human-technology interaction who have important things to contribute to research. But the traditional academic pathway, with its emphasis on a very particular mode of written expression, filters out many of the people with the most relevant experience.
AI tools can bridge that gap. They can help a practitioner-researcher translate practical knowledge into academic language. They can demystify scholarly conventions. They can make the literature review process less overwhelming for someone encountering academic databases for the first time.
If we’re serious about inclusion in doctoral education, if CDTs with “inclusion” in their name actually mean it, then we should be paying attention to how AI tools can widen participation rather than building frameworks that stigmatise their use. The people these tools empower most are exactly the non-traditional researchers that inclusive programmes should be attracting.
What Should Change
I’m not arguing for a free-for-all. I’m arguing for transparency over prohibition.
The shift needs to be from detection to disclosure. AI detection tools are unreliable (several universities have already banned their use on submitted work), they disadvantage non-native English speakers, and they create an adversarial dynamic. The alternative is straightforward: require transparent disclosure of AI use, normalise that disclosure, and evaluate work on the quality of its intellectual contribution rather than the purity of its production process.
The University of Exeter’s mandatory AI disclosure template for theses is a practical starting point. Policy also needs to be methodology-sensitive, recognising that a quantitative researcher using AI to generate statistical code is in a fundamentally different position from a practice-based researcher for whom AI-assisted practice is the medium itself.
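To make “transparent disclosure” concrete, here is a rough sketch of what a per-chapter disclosure block might contain. This is my own hypothetical illustration, not Exeter’s actual template; the field names and wording are mine:

```
AI Use Disclosure: Chapter 4

Tools and versions:  Claude (Anthropic), model version recorded at time of use
Scope of use:        literature search and summarisation; first-draft prose;
                     code generation for the practice component
Human oversight:     every cited source checked against the original;
                     all prose revised and approved by the author;
                     all generated code reviewed and tested before inclusion
Not used for:        formulating research questions; analysing participant
                     data; interpreting findings
```

The point isn’t the exact fields. It’s the norm they establish: disclosure becomes routine and specific, and the work is assessed alongside it rather than policed after the fact.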
And AI-focused CDTs should be leading this, not waiting for the rest of the sector to figure it out. These programmes were designed to produce AI leaders. The researchers within them are the best-equipped people to develop, test, and refine frameworks for transparent AI integration in academic work. The absence of such frameworks from the programmes most deeply immersed in AI isn’t just a missed opportunity. It’s an abdication.
Walking the Walk
I want to be transparent about something. This blog post was written with AI assistance. The paper it’s based on was researched and drafted with extensive help from Claude. I made the editorial, analytical, and argumentative decisions. The AI helped me execute them faster and more thoroughly than I could have alone.
I’m disclosing this because the alternative, pretending otherwise, would undermine the very argument I’m making. If the expectation is that I should hide my use of the tools I’m studying, something has gone fundamentally wrong.
As I write this, Jack Dorsey has just cut Block’s workforce nearly in half, explicitly citing AI. Anthropic’s Economic Index says roughly one in two US jobs now has at least 25% of tasks appearing in AI usage data. The world outside the university is not waiting for institutional policy to catch up.
Inside the university, doctoral researchers are navigating a policy vacuum, making individual judgments about AI use with little guidance and strong incentives to stay quiet. For those of us in AI CDTs, the contradiction is especially pointed. We are funded to advance AI. We are trained to understand AI. And the institutional framework governing our work has essentially nothing to say about whether we should use it.
I’ve been working through this argument more formally as part of my doctoral research. The working paper below has the full policy evidence, theoretical framing, and a proposed disclosure framework for practice-based theses. It’s a draft — still evolving — but I’m sharing it early because I think the conversation needs to happen now rather than after another two years of silence.