Holding Time is a public art project; for its Ablaze in Bradford exhibition I built an AI-powered evaluation tool designed to go beyond traditional feedback forms.
The Tool
The system captured visitor responses and used AI to interpret and synthesise the feedback, building a richer picture of how people were experiencing the exhibition than a simple rating scale could provide. The processed insights were then fed back to the project website, closing the loop between visitor experience and the project's ongoing narrative.
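The capture-interpret-publish loop described above can be sketched in miniature. This is a hypothetical illustration, not the project's actual code: the function names, the stopword list, and the word-frequency "interpret" step are all stand-ins (the real tool used an AI model for interpretation), chosen here to keep the example self-contained and runnable.

```python
from collections import Counter
import re

# Illustrative stopwords only; the real system had no such list.
STOPWORDS = {"the", "a", "an", "and", "it", "was", "i", "of", "to", "in", "me"}

def collect(responses: list[str]) -> list[str]:
    """Normalise raw visitor responses: strip whitespace, drop empties."""
    return [r.strip() for r in responses if r.strip()]

def interpret(responses: list[str]) -> list[tuple[str, int]]:
    """Stand-in for the AI step: surface the most frequent themes.

    The production tool interpreted free text with an AI model; simple
    word counting here just demonstrates the shape of the pipeline.
    """
    words = []
    for r in responses:
        words += [w for w in re.findall(r"[a-z']+", r.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(5)

def publish(themes: list[tuple[str, int]]) -> str:
    """Format the synthesised insights for posting back to the website."""
    return "Visitors responded to: " + ", ".join(w for w, _ in themes)

responses = collect([
    "The light felt alive, like the building was breathing",
    "Unexpected, it made me think about time and memory",
    "   ",
])
print(publish(interpret(responses)))
```

The key design point the sketch preserves is that free text flows through unreduced until the interpretation step, rather than being forced into a rating scale at capture time.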
Why This Approach
Standard evaluation in arts and culture tends to reduce complex experiences to numbers. The brief here was to build something that could handle the nuance of how people actually respond to art: the unexpected connections, the emotional responses, the things a tick-box can't capture. AI was well suited to this because it could process free-text responses at scale while preserving the specificity of individual reactions.
What It Demonstrated
This project sits at an interesting intersection: applied AI that isn’t about generating content or automating tasks, but about understanding human experience. It showed that AI evaluation tools can work in cultural contexts where the goal is insight rather than efficiency.