ISOLATING THE MACHINE SOUND OBJECT_
I trained PRiSM SampleRNN on the materials in my Concrete training dataset. I employed two methods for sound generation using this model: first, training took place over five days on the supercomputers at RNCM, supported by PRiSM Research Software Engineer Dr Christopher Melen; second, I undertook cloud-based training experiments at home using a Google Colab notebook. I then started working with the ML-generated material. What was I listening for? Errors, unique ‘blends’ of material, events and tones; anything unexpected in terms of new materiality, some alchemy previously unheard. What would the machine recognise in the data that I couldn’t? The listening process involved going through audio from many epochs of training. As sole creator of the dataset I knew in intricate detail the sound objects comprising the input, which meant that as a listener I was automatically prone to trying to identify particular sounds, so there was an effort to retain impartiality in order to identify these new forms and let this new neural material speak for itself.
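Part of that listening triage can be roughed out in code. As a minimal sketch (the file names, sample values and loudest-first ordering are my own assumptions, not part of the original workflow), ranking generated clips by RMS level surfaces the densest material first:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples in [-1.0, 1.0]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rank_clips(clips):
    """Sort (name, samples) pairs loudest-first to prioritise listening."""
    return sorted(clips, key=lambda c: rms(c[1]), reverse=True)

# Synthetic stand-ins for per-epoch generated audio (hypothetical names).
clips = [
    ("epoch_10.wav", [0.01, -0.02, 0.01, -0.01]),
    ("epoch_50.wav", [0.5, -0.6, 0.4, -0.5]),
]
print([name for name, _ in rank_clips(clips)])
```

A real pass would read the generated WAV files from disk; the point here is only the triage idea, not a claim about how the original listening was done.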
Notes on a Concrete Music, Pierre Schaeffer
‘But the miracle of concrete music is that… things begin to speak by themselves, as if they were bringing a message from a world unknown to us and outside us’
I treated the material as I would approach working with any audio content, entering a process of listening, analysing and being guided by what the material suggests tonally, rhythmically or in terms of events. This final process of human classification and audio fragment extraction again mirrored some of the processes of the ML model itself: looking for patterns and common features to ‘make sense’ of the audio data, categorising and classifying sounds such as ‘data falling downstairs’ (extremely harsh, extended metallic scraping sounds) and ‘machine wind’ (whistling and ambient tonal atmospheres).
There was a distinct air of the uncanny in working with this material and within this compositional headspace, as I kept reminding myself that this new material, being both sound source and collaborator, was not the sound of objects, nor sited in a recording, but a statistical prediction, a machine-generated sound object. Around this time I was reading about early alchemical experiments and found many parallels and analogies between the ideas of transference of state, transmutation of matter and the hidden layers of multiple neural networks. In the above clip, my first ML sample, you can hear some of this machine alchemy creating textures I didn’t expect and couldn’t have created as a human. Here I can determine some origin sounds of crushed glass and electromagnetic noise, plus the attacks of tools and the suggestion of imagined machines switching on and off. The way in which the events occur and the textures morph in and out of each other creates new forms and improvisational architectures. I have been impressed with the articulation of the very quiet audio too: the model detected delicate handling noise from the field recorder and EMF detectors in the original recordings, so in some way the ghost hand of the recordist is present.
CATEGORISING & COMPOSING_
How to start? My way into the mountains of noisy AI material was to create a classification system. Listening back through the samples from each epoch, I sampled and extracted sounds around the following types: events, textures, one shots, new materialities and atmospheres, again experiencing this feeling of mirroring machine learning processes of labelling, classifying and trying to recognise both commonalities and differences. I was keen to explore what the innate properties of the materials would suggest, rather than approach the material with any prior compositional ideas.
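That classification pass can be sketched as a small labelled library. Only the five category labels come from the text above; the clip names and the dict-based structure are hypothetical illustration:

```python
# The five working categories used during the classification pass.
CATEGORIES = ["events", "textures", "one_shots", "new_materialities", "atmospheres"]

def file_clip(library, category, clip_name):
    """Place an extracted fragment under its category label."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    library.setdefault(category, []).append(clip_name)
    return library

library = {}
file_clip(library, "textures", "machine_wind_03.wav")   # hypothetical file names
file_clip(library, "events", "data_falling_downstairs_01.wav")
print(library)
```

The guard against unknown labels mirrors the point made above: the human classifier, like the model, only ‘makes sense’ of sound relative to a fixed set of categories.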
“Concrete music will liberate itself from these modes of attack, which we call natural, and replace them with more complex modes”
Notes on a Concrete Music, Pierre Schaeffer
I listened closely in short bursts, as the material was fairly dense and scratchy, with a high level of peaks, a wide dynamic range and harsh metallic noise content that was tough on the ears for prolonged periods. I began working with the sounds using musique concrète techniques to extract the aura of the materials, considering the body (or ‘allure’) of a note, deleting attacks and developing the inner spectra over time and through modulation. I used small fragments of sound and expanded them to create instruments from these abstract sonic-machine materialities, letting them play out, speak and suggest new atmospheres for themselves.
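The ‘deleting attacks’ technique can be approximated digitally with a short linear fade-in over a sound’s onset. A minimal sketch, where the fade length and sample values are arbitrary assumptions rather than details of the actual editing process:

```python
def delete_attack(samples, fade_len):
    """Soften the onset transient with a linear fade-in of fade_len samples."""
    out = list(samples)
    for i in range(min(fade_len, len(out))):
        out[i] *= i / fade_len
    return out

tone = [1.0] * 8                 # a sound with an abrupt attack
softened = delete_attack(tone, 4)
print(softened)                  # onset now ramps up from silence
```

Removing the attack this way leaves only the body of the note, which is what allows the inner spectra to be developed on their own terms.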
LOFI – AURA?
These are some of the techniques I used to build audio for my first neural synthesis piece, a 10-minute AV piece for Future Music #3 at the Royal Northern College of Music. One aim of the piece was to see whether, having trained a neural network on recordings of the auras of mill spaces and materials that carry historic, locative and emotional charge and meaning, a machine could produce its own sonic aura. I would say yes. I was enchanted by the lo-fi sonic world of the output material; I felt it had a distinct, almost analogue warmth and depth, ironically reminiscent of early musique concrète tape recordings. This ‘lofi-ness’ is due to the low sample rate (16 kHz) required for training the model and can be a challenge; I decided, however, to embrace the feel of this for the piece. Having just begun working with this material, I intend to undertake more experiments and analysis, but at this stage I know that the aura of the soundworld and my connection to the material has captivated my sense of the electrical imaginary.
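The 16 kHz constraint means source recordings have to be resampled down before training, and that downsampling is one source of the lo-fi character. A minimal sketch using naive linear interpolation (a real pipeline would use an anti-aliased resampler such as those in librosa or SoX; the 48 kHz source rate is an assumption, only the 16 kHz figure comes from the text):

```python
def resample(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler (no anti-aliasing filter)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate        # position in the source signal
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

# One second of (silent) 48 kHz audio down to the model's 16 kHz rate.
lo = resample([0.0] * 48000, 48000, 16000)
print(len(lo))
```

Skipping the anti-aliasing stage, as here, folds high frequencies back into the audible band, which is part of why crude sample-rate reduction sounds the way it does.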
The process definitely created new artefacts, blended material atmospheres and improvised collaged architectures, which for me pose exciting questions for composition. These features were not pale imitations or reproductions of the training data but held their own space and authenticity as new object forms and projections.