DEVELOPING A SYSTEM FOR_
ELECTRONICS | AI | SOUND SCULPTURE
Following the online AV piece for Future Music #3, I wanted to extend my work with this material and develop a system for performing the piece live. Some early sonic questions were:
- How can I work with the PRiSM-SampleRNN machine-learning-generated material in real time?
- How can I take a fixed media composition and transform it for live performance, bringing in gesture and live materials?
- What performance system would allow for a combination of live sound objects, electronics and neural/AI materials?
NARRATIVE_ To take the work into a live context, I began to consider what system I would need to develop for the AURA MACHINE piece. One starting point was to work from the narrative flow of the piece. Starting with raw sound / concrete materials from the training dataset of glass, wood, noise and metal, how can I play these live? I used contact microphones to amplify glass and metal objects, processing these through guitar pedals and into the modular setup. I am using Ableton Live as the brain of the set, moving through the narrative changes. Working with a DAW at this point has been crucial: with neural synthesis materials you are essentially working with lots of generated samples (it is hard to do neural synthesis in real time, as the model requires training time). Below is the diagram showing the sketched flow for the piece, from concrete materials, through transmutation in latent space, to generated AI output.
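As a rough illustration of the sample-wrangling side of this workflow, the sketch below gathers generated WAV files into named section folders ready to drop into the Ableton set. The folder names and epoch numbers are placeholders, not my actual project structure.

```python
# Minimal sketch: collect generated WAV files (one folder per training epoch)
# into named 'narrative section' folders for import into the DAW.
# Folder layout and epoch names here are hypothetical.
from pathlib import Path
import shutil

GENERATED = Path("samplernn_output")      # e.g. samplernn_output/epoch_040/*.wav
SET_FOLDER = Path("ableton_set/sections")

SECTIONS = {
    "01_concrete": ["epoch_010"],          # early, noisier output
    "02_transmutation": ["epoch_040"],     # mid-training textures
    "03_generated": ["epoch_080"],         # later, more coherent output
}

for section, epochs in SECTIONS.items():
    target = SET_FOLDER / section
    target.mkdir(parents=True, exist_ok=True)
    for epoch in epochs:
        for wav in sorted((GENERATED / epoch).glob("*.wav")):
            # Prefix with the epoch so the source generation stays traceable in the DAW browser.
            shutil.copy(wav, target / f"{epoch}_{wav.name}")
```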

NEURAL MATERIALS = LO-FI PRODUCTION

Producing the materials is a difficult task, with each sample requiring individual equalisation, filtering and compression, because the material has a very wide dynamic range and many peaks. The lo-fi quality of the material is also a concern: how much should you refine or change it? For this piece I kept it as true to the raw output as possible, as I want to share the sounds the model produces; the actual ML materiality is, for me, the most interesting part. I used the ML sounds to create layered tonal sections, a range of drums and attacks, and noise textures to blend with FM synth pads, for example. Here are a couple of examples of the raw ML output from different epochs.
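In practice I do this clean-up per sample, by ear, in the DAW, but a batch version of the idea could look something like the sketch below: a gentle high-pass to remove rumble, then peak normalisation to tame the wild levels. The cutoff and target level are placeholders, not my settings.

```python
# Illustrative batch clean-up for raw SampleRNN output: high-pass EQ to remove
# low-end rumble, then peak normalisation to tame the wide dynamic range.
# Cutoff and target level are placeholders; in practice this is judged per sample.
from pathlib import Path
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

IN_DIR = Path("raw_ml_output")
OUT_DIR = Path("cleaned_ml_output")
OUT_DIR.mkdir(exist_ok=True)

HPF_CUTOFF_HZ = 60.0   # remove sub rumble without touching the character
TARGET_PEAK = 0.9      # leave some headroom for further processing

for wav_path in sorted(IN_DIR.glob("*.wav")):
    audio, sr = sf.read(wav_path)
    sos = butter(4, HPF_CUTOFF_HZ, btype="highpass", fs=sr, output="sos")
    filtered = sosfilt(sos, audio, axis=0)
    peak = np.max(np.abs(filtered))
    if peak > 0:
        filtered = filtered * (TARGET_PEAK / peak)
    sf.write(OUT_DIR / wav_path.name, filtered, sr)
```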
MODULAR FOR MUSIQUE CONCRÈTE – LIVE-NESS

The starting point for the AI modular setup was the Morphagene module from Make Noise. This takes live inputs and prerecorded material and scans through the content as micro sounds, using a granular approach to uncover textures and materialities within the samples. I used this module to reinterpret and expand the machine learning output, finding more patterns, timbres and essences in the sound which I could then perform and manipulate live. It drives the ‘machine wind’ section, creating pitched sweeps and volume automation with VCAs. Reframing and playing the AI sounds as live instruments is really key to giving this performance an improvisational feel. The next additions to the setup will involve integrating the live sound sculptures into the system for modulation, control and acoustic amplification. New modules include Ears from Mutable Instruments for live contact mic input, and the Ikarie filter from Bastl Instruments.
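Away from the hardware, the idea behind this kind of granular scanning can be sketched in a few lines: step a slow playhead through a sample, take short windowed grains, and overlap-add them into a new texture. This is a conceptual sketch of the technique only, not how the Morphagene itself is implemented; the grain size and scan speed are arbitrary values to play with.

```python
# Conceptual sketch of granular scanning: step a slow playhead through a
# sample, take short Hann-windowed grains, and overlap-add them into a new
# texture. Grain size and scan speed are the parameters to play with.
import numpy as np
import soundfile as sf

audio, sr = sf.read("ml_output.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)          # work in mono for simplicity

GRAIN = int(0.05 * sr)                  # 50 ms grains ("micro sounds")
HOP = GRAIN // 4                        # output hop: 75% overlap
SCAN_SPEED = 0.25                       # playhead moves at quarter speed

window = np.hanning(GRAIN)
n_grains = (len(audio) - GRAIN) // HOP
out = np.zeros(n_grains * HOP + GRAIN)

for i in range(n_grains):
    src = int(i * HOP * SCAN_SPEED)     # slowed-down scan position in the source
    grain = audio[src:src + GRAIN] * window
    out[i * HOP:i * HOP + GRAIN] += grain

peak = np.max(np.abs(out))
if peak > 0:
    out /= peak                         # normalise before writing
sf.write("granular_scan.wav", out, sr)
```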
GESTURE & LIVE SOUND SCULPTURE_ I’m incorporating my Graphene Performance Interface into the live set to add gesture and expression when playing with the materials, and I plan to expand this into an interactive system linking sound sculpture, AI and modular, to explore future possibilities for performing this work.
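As a minimal sketch of how gesture data could be routed into the rest of the system, the example below sends a sensor value out as OSC (for instance to a Max for Live device listening on a local port), using the python-osc library. The gesture source, port and OSC address are all placeholders, not the actual Graphene interface implementation.

```python
# Minimal sketch: send a gesture value out as OSC so it can modulate the set
# (e.g. a Max for Live device listening on port 9000). The read_gesture()
# function is a placeholder for the actual interface input.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # placeholder host/port

def read_gesture(t: float) -> float:
    """Placeholder gesture source: replace with real sensor input (0.0-1.0)."""
    return 0.5 + 0.5 * math.sin(t)

start = time.time()
while True:
    value = read_gesture(time.time() - start)
    client.send_message("/aura/gesture/pressure", value)   # placeholder address
    time.sleep(0.01)                                        # ~100 Hz control rate
```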