The Pivot: Listening to the Data (The Aha Moment)
Graphs from Sam’s early research on wave analysis.
We had the hardware—Sam’s hacked lie detector. We had the musical concept—Alex’s beautiful synthesizer loops. The installation for the Philadelphia Museum of Art was coming together. But when I actually sat down to engineer the bridge between the two in Ableton Live, I hit a wall.
The Loop Problem
The original plan was for the plants to act as selectors. Imagine a jukebox where the plant presses the buttons: if the plant’s electrical signal hit a certain threshold, it would trigger "Loop A." If it shifted, it would trigger "Loop B."
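To make the limitation concrete, here is a minimal Python sketch of that selector logic. The loop names, threshold value, and normalized signal range are illustrative assumptions, not the code that actually ran in the installation.

```python
# Hypothetical sketch of the "jukebox" approach: a single threshold on the
# plant's signal decides which pre-recorded clip plays. Names and numbers
# are illustrative, not the real installation logic.

LOOP_A, LOOP_B = "Loop A", "Loop B"
THRESHOLD = 0.6  # assumed normalized sensor level (0.0 to 1.0)

def select_loop(signal_level: float) -> str:
    """Pick which pre-recorded loop should play, based on one threshold."""
    return LOOP_A if signal_level >= THRESHOLD else LOOP_B

# Everything the signal does between threshold crossings is discarded:
# the constant micro-fluctuations never reach the listener.
```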
Musically, Alex’s loops were excellent. Conceptually, the system worked. But as I watched the raw data streaming in from Sam’s sensor, I realized we were missing the point. The data stream wasn't a series of static states; it was a river. It was incredibly rich, filled with micro-fluctuations and constant, subtle activity.
By forcing the plant to just select pre-recorded loops, we were essentially asking it to speak, but only allowing it to choose from a few pre-written sentences. Between every loop trigger, there was a wealth of biological activity happening that we simply weren't hearing. We were flattening the curve of life into a binary switch.
Building the Signal Chain
Based on the sample MIDI data I received from Sam, it was clear we needed to radically rethink our approach.
I realized I didn't want to hear the plant trigger our music. I wanted to hear the plant control the synthesizer directly.
I scrapped the loop-triggering system and spent a few days building a completely new signal chain in Ableton Live. This was a generative music system designed to sonify the raw data in real-time. Instead of "play this clip," the data was mapped to musical parameters: pitch, rhythm, timbre, and effects.
It became a four-channel installation where each plant had its own instrument. I set up the boundaries, limiting the instruments to specific pentatonic scales across three octaves so the result would remain musical, but the texture and expression were entirely up to the plant. The plant’s activity level could modulate the arpeggiation rate, change the wet/dry mix on a reverb, or shift the character of the sound itself. Even with just 15 allowable notes, the MIDI control-change messages (values 0 to 127) gave each parameter 128 distinct settings, so the output never felt static.
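As a rough illustration of that mapping, here is a minimal Python sketch. It assumes a normalized biodata reading, a single C pentatonic scale, and a generic control-change destination; the real mapping lived inside Ableton Live’s instrument racks and MIDI routing, so every name and number here is an assumption for illustration only.

```python
# Illustrative sketch of mapping biodata to a constrained generative voice:
# one of 15 pentatonic pitches plus a 0-127 control-change value.

# C major pentatonic across three octaves: 5 notes x 3 octaves = 15 pitches.
PENTATONIC_STEPS = [0, 2, 4, 7, 9]
ROOT = 48  # MIDI note C3 (assumed root)
SCALE = [ROOT + 12 * octave + step
         for octave in range(3)
         for step in PENTATONIC_STEPS]  # the 15 allowable notes

def biodata_to_midi(reading: float, activity: float) -> tuple[int, int]:
    """Map a normalized reading (0.0-1.0) to a pitch, and an activity level
    (0.0-1.0) to a control-change value (arp rate, reverb wet/dry, etc.)."""
    note_index = min(int(reading * len(SCALE)), len(SCALE) - 1)
    cc_value = min(int(activity * 128), 127)  # 128 possible settings
    return SCALE[note_index], cc_value

# Example: a mid-range reading with moderate activity.
note, cc = biodata_to_midi(0.5, 0.4)  # -> (64, 51): an E4 and CC value 51
```

Constraining pitch to a fixed scale while leaving the continuous control values free is what keeps the result musical without quantizing away the plant’s subtle activity.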
The Category Creation Moment
The difference was immediate and undeniable. With the old loop system, you could touch a plant and might have to wait ten seconds for the phrase to finish before hearing a change. With this new generative chain, the response was instantaneous.
We had stumbled upon something continuously generative and novel. It was no longer repetitive; it was alive. You could hear the difference between times of day, or how the plant reacted to changes in the room.
This was the true "Aha" moment. We weren't just making an art installation anymore; we had created a new way of representing biological data as sound. This specific signal chain, taking the raw richness of the galvanic impedance signal and mapping it to real-time generative synthesis, became the foundation of everything we would build for the next decade. It was the birth of the "biodata sonification DSP" that powers PlantWave today.
We had stopped trying to make the plants play for us, and finally built a way to listen to them.
