Sympoietics: DIGM 5020/6020 Vertical Studio Lab Final Showcase

Program Notes:

Ian Heyward & Gianluca Sabatini – Creatures in the Machine

About: This installation invites participants to experience a realm of creatures that exist through the medium of sound. By performing physical gestures in front of the machine’s camera, participants influence the sounds they hear in different ways. Driven by a pre-trained neural network, the creatures respond to movement in real time and habituate to recurring movement patterns as time goes on. Shaped together by machine learning and user action, they create a dynamic symphony of sounds that the participant experiences as part of the system.
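
A minimal sketch of one way the habituation described above could work; the class, feature dimension, and rates are illustrative assumptions, not the artists' implementation. The idea is simply that a running trace of recent gesture features damps the creatures' response to movements the system has already seen, so novel gestures provoke the strongest reaction.

```python
import numpy as np

class HabituatingCreature:
    """Illustrative sketch: a creature whose sonic response fades for
    gesture patterns it has encountered repeatedly (habituation)."""

    def __init__(self, feature_dim=16, memory_rate=0.05):
        self.memory = np.zeros(feature_dim)   # running trace of past gestures
        self.memory_rate = memory_rate        # how quickly habituation builds

    def respond(self, gesture_features):
        """Map a gesture feature vector (e.g. the output of a pose network)
        to a response gain in [0, 1]; familiar gestures get a weaker response."""
        g = np.asarray(gesture_features, dtype=float)
        norm = np.linalg.norm(g) + 1e-9
        familiarity = float(self.memory @ g) / ((np.linalg.norm(self.memory) + 1e-9) * norm)
        # update the habituation trace toward the current gesture
        self.memory = (1 - self.memory_rate) * self.memory + self.memory_rate * g / norm
        # novel gestures (low familiarity) produce the strongest response
        return float(np.clip(1.0 - familiarity, 0.0, 1.0))
```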

Kushi Jetley – Nadi: The Body Knows

About: Nadi: The Body Knows is an interactive biosensor installation mapping the performer’s heart rate to Hindustani classical ragas in real time. Nadi, Sanskrit for pulse, river, and channel, names the invisible current connecting body to sound. The system acts as an autonomous agent maintaining its own mood state, temporal preferences, and transition probabilities that resist immediate physiological change. The machine listens, but it also remembers, hesitates, and decides. Three ragas, Bhairav, Bhimpalasi, and Bhupali, each carrying a time of day and emotional register, emerge and dissolve in response to the living body. Nadi inverts rasa theory: instead of music inducing states, physiology selects its own sonic environment. The body becomes the instrument and the machine becomes a collaborator, with neither fully controlling the other.
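
A minimal sketch of how an agent with its own transition probabilities might resist immediate physiological change, assuming heart rate is binned into rough arousal levels. The raga names follow the description above; the thresholds, switch probabilities, and mapping are illustrative placeholders, not the artist's system.

```python
import random

RAGAS = ["Bhairav", "Bhimpalasi", "Bhupali"]

# Probability of actually leaving the current raga when the body asks for a change;
# low values are what make the machine "hesitate" (values are placeholders).
SWITCH_PROB = {"Bhairav": 0.2, "Bhimpalasi": 0.3, "Bhupali": 0.25}

def suggested_raga(heart_rate_bpm):
    """Crude arousal-to-raga mapping (thresholds are illustrative)."""
    if heart_rate_bpm < 70:
        return "Bhairav"
    if heart_rate_bpm < 90:
        return "Bhupali"
    return "Bhimpalasi"

def step(current_raga, heart_rate_bpm):
    """One decision step: the machine listens, then decides whether to move."""
    target = suggested_raga(heart_rate_bpm)
    if target == current_raga:
        return current_raga
    if random.random() < SWITCH_PROB[current_raga]:
        return target          # the agent consents to the body's request
    return current_raga        # the agent remembers, hesitates, stays

raga = "Bhairav"
for bpm in [68, 72, 95, 96, 97, 80]:
    raga = step(raga, bpm)
    print(bpm, "->", raga)
```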

Vladimir Kanic – Photosynthetic Cyborgs

About: Photosynthetic Cyborgs explores interspecies communication through a non-human language co-created by algae and AI. Participants speak through gesture before living bioreactors; cameras track the body and hands as patterns of light, and machine learning translates these movements into sequenced bubble patterns rippling through the cultures. The exchange runs both ways: biosensors feed the algae’s metabolic rhythms back into the system, so that photosynthetic activity shapes its own voice. Humans also exhale CO2; algae breathe it in and answer. When the algal culture is consuming carbon, its responses become longer and more animated. What emerges between these two breathing bodies is a sympoietic exchange, an interspecies improvisation where agency belongs to no single entity and intelligence is distributed across bodies, machines, and living matter.

Philip Michalowski – The Follower: AI Human Co-Creation through Tactile Display

About: The Follower explores human and AI co-creational agency through soft embroidery interaction. Touch, always combined with gesture, creates a bond between human and AI within the system, leading to gestural and sonic co-creation. The work presents an expressive form of switched co-creational leadership, offering an experience in which human and AI participate on equal terms. Sonic and tactile embodiment makes the AI’s presence and collaborative capacity felt. That presence, and the reactions it provokes, underline equality in creation and the exchange of leader and follower roles across all uses of AI.

Xingbang Tang – Sonic Lenia: Living Patterns

About: Lenia, a continuous cellular automaton, has revealed a wide variety of life-like entities. In this work, each organism is given a distinct sonic identity, an artificial “cry” reminiscent of Pokémon, deepening its presence and individuality. Through a microphone, participants can interact with these entities in real time, influencing their behavior: causing them to die, explode, or transform their motion. Certain species exhibit unique responses to sound, forming intricate patterns when exposed to specific audio inputs. During the exhibition, a musical sequence will be performed to reveal Lenia’s expressive responses to sound.
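
For readers unfamiliar with Lenia, the sketch below shows a single generic update step of the continuous cellular automaton: a ring-shaped kernel is convolved with the world, a smooth growth function is applied, and the world is nudged by a small time step. The parameters are generic textbook values; the specific species and the audio coupling used in the installation are not shown.

```python
import numpy as np

def ring_kernel(size=128, radius=13):
    """Smooth ring-shaped kernel (standard Lenia core), returned as its FFT."""
    y, x = np.ogrid[-size // 2:size // 2, -size // 2:size // 2]
    r = np.sqrt(x * x + y * y) / radius
    inside = (r > 0) & (r < 1)
    K = np.zeros_like(r)
    K[inside] = np.exp(4.0 - 1.0 / (r[inside] * (1.0 - r[inside])))
    return np.fft.fft2(np.fft.fftshift(K / K.sum()))

def lenia_step(A, K_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One Lenia update: A is the world grid in [0, 1]."""
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))           # neighborhood potential
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2 * sigma ** 2)) - 1.0  # smooth growth map
    # In an audio-reactive setting, microphone features could plausibly
    # perturb mu/sigma or inject mass into A (an assumption, not the artist's code).
    return np.clip(A + dt * G, 0.0, 1.0)

world = np.random.rand(128, 128)
K_fft = ring_kernel()
for _ in range(100):
    world = lenia_step(world, K_fft)
```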

Kesha Upadhyay – Where Gesture Becomes Sound: Drawing as an Instrument

About: Where Gesture Becomes Sound: Drawing as an Instrument is an interactive audio-visual system that transforms drawing into an immersive musical performance. Real-time mouse or gesture input shapes sound: faster strokes accelerate playback, slower ones stretch and distort it. “Play mode” replays accumulated gestures as dynamic compositions, while clearing resets visuals and audio. Drawing becomes both a visual trace and a performative instrument, linking gesture, memory, and sound. Each movement leaves a visual mark and an evolving sonic texture. The system translates motion into expressive audio, layering past gestures as echoes and allowing a brand-new creation to begin with each cleared canvas.
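
A minimal sketch of the stroke-speed to playback-rate mapping described above; the class, thresholds, and rate range are illustrative assumptions rather than the artist's values.

```python
import time

class GestureToSound:
    """Illustrative sketch: fast strokes speed playback up, slow strokes stretch it."""

    def __init__(self):
        self.last_pos = None
        self.last_time = None
        self.strokes = []                     # accumulated gestures for "play mode"

    def on_move(self, x, y):
        """Convert mouse/gesture motion into a playback rate."""
        now = time.monotonic()
        rate = 1.0
        if self.last_pos is not None:
            dt = max(now - self.last_time, 1e-3)
            speed = ((x - self.last_pos[0]) ** 2 + (y - self.last_pos[1]) ** 2) ** 0.5 / dt
            # map pixel speed (roughly 0..2000 px/s) to a playback rate of 0.25x..4x
            rate = min(4.0, max(0.25, speed / 500.0))
        self.last_pos, self.last_time = (x, y), now
        self.strokes.append(((x, y), rate))   # remembered and replayed in "play mode"
        return rate

    def clear(self):
        """Reset visuals and audio: wipe the accumulated gesture memory."""
        self.__init__()
```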

Jingwen Zhang – Flutecoma: An Ethereal Symbiosis of Breath and Purr

About: Flutecoma is an immersive performance system that blurs the boundaries between the organic and the algorithmic. By weaving together a delicate tapestry of feline vocalizations, the meditative breath of the Shakuhachi and flute, and the soaring clarity of human whistling, the system creates a soundscape that feels both ancient and futuristic. Powered by the FluCoMa (Fluid Corpus Manipulation) toolkit, the system acts as a sentient collaborator, using real-time machine learning and stochastic control to evolve the sound spontaneously. The result is a “living” composition—a naturalistic, ethereal experience that dwells in the hollow space between intention and randomness.