SHIFT HAPPENS — Live AI Art

Built a live AI experiment for the SHIFT HAPPENS open mic night where spoken words were transformed into images in real-time, creating a visual conversation between performer and machine.

A performer at the SHIFT HAPPENS open mic night with AI-generated visuals flowing from their spoken words in real-time
Problem

How do you use AI in a live performance so that it amplifies, rather than replaces, the human creative act?

Results
  • Real-time spoken-word-to-image AI system
  • Deployed at SHIFT HAPPENS open mic, Phoenix Art Space, September 2023
  • Part of Community Takeover exhibition (Sep 2-17, 2023)

SHIFT HAPPENS was a project created by artists Sarah Cole and Annis Joslin in partnership with Brighton Women’s Centre — a safe, experimental space for making, sharing, and reflecting on ideas and feelings. For the open mic night on September 7th, I built an AI system where words became images as they were spoken.

The Technical Setup

The system listened to live performances — music, spoken word, stories, poetry — and generated images in response to what was being said, in real-time. The aim was to create a visual conversation between the performer and the machine, where the AI’s interpretation became part of the performance itself rather than a separate output.
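The pipeline described above can be sketched as a loop: transcribe an audio chunk, fold the recent phrases into a prompt, and hand that prompt to an image generator. This is a minimal illustrative sketch, not the actual implementation; the `transcribe` and `generate` callables, the `PromptBuilder` helper, and the rolling-window size are all assumptions, since the text does not specify which speech-to-text or image models were used.

```python
# Hypothetical sketch of a real-time spoken-word-to-image loop.
# All names here (PromptBuilder, run_pipeline, transcribe, generate)
# are illustrative assumptions, not the system used at the event.
from collections import deque


class PromptBuilder:
    """Keeps a rolling window of recent spoken phrases so each image
    responds to what was just said, not the whole performance."""

    def __init__(self, window: int = 4):
        self.phrases = deque(maxlen=window)

    def add(self, phrase: str) -> None:
        phrase = phrase.strip()
        if phrase:
            self.phrases.append(phrase)

    def prompt(self) -> str:
        # Join only the performer's own recent words -- responsive
        # rather than prescriptive: no added style directives.
        return ", ".join(self.phrases)


def run_pipeline(transcribe, generate, audio_chunks, window: int = 4):
    """Produce one image per transcribed chunk. The speech-to-text
    (`transcribe`) and text-to-image (`generate`) backends are injected
    so any model can be plugged in."""
    builder = PromptBuilder(window=window)
    images = []
    for chunk in audio_chunks:
        builder.add(transcribe(chunk))
        if builder.prompt():
            images.append(generate(builder.prompt()))
    return images
```

Injecting the two model backends keeps the loop itself trivial, which matters in a live setting: latency and failure handling live at the edges, while the core stays a simple conversation between incoming words and outgoing images.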

The Context

The open mic was part of the broader Community Takeover exhibition at Phoenix Art Space, running September 2-17, 2023. SHIFT HAPPENS was made by twelve people who met regularly throughout the year, trusting a creative process and each other. The work created at SHIFT included monoprints, large-scale and collaborative drawings, 360 video, performance, animation, and playing with objects and words.

What I Took Away

This was a useful test case for AI in live creative contexts. The technology worked best when it was responsive rather than prescriptive — following the performer’s lead rather than trying to direct the experience. It reinforced something I keep finding: the most interesting AI applications are the ones that amplify human expression rather than attempting to replace it.