
I am a sound and electronic media artist working with sound sculpture, DIY electronics and human-machine systems to create performances and sound objects. I like to explore the sonic materiality of our technologies and consider our agency within seemingly autonomous systems.
I’d been interested in creative AI for some time, having undertaken a course on the technical history of AI at the University of Karlsruhe and a research trip to St Petersburg and Moscow as part of the British Council UK-Russia Year of Music, where I met musicians and technologists working within this emerging field of sonic AI.
I was aware of tools for MIDI pattern generation, but it was here that I first heard audio produced by neural synthesis algorithms and became interested in applying this approach to my work with materials and musique concrète – sparking questions such as ‘if a machine can generate new sonic materialities, what happens to the authentic matter when it is processed by a model?’
On my return to the UK, approaching NOVARS and PRiSM was a natural progression: a chance to test these questions using the prism-samplernn code, guided by the research provocation “How can concrete materials and neural networks project future sonic realities?”
My practice involves exploring materials through sampling techniques with recorded or archival sound and sculpture – I’m fascinated by acousmatic sound and draw heavily on the transformational nature of musique concrète. This manipulation of recorded sound makes us consider the very texture of the material itself, its potential, and wider questions around sonic atoms, form and matter. The residency builds on my previous research project MATERIALITY, in which I built a live system around industrial musique concrète, amplified metal sound sculptures, drum machines, samplers and a conductive gestural interface made from graphene, developed in collaboration with the National Graphene Institute. My key areas of critical exploration are as follows:
DEFINING SONIC MACHINE LEARNING_Considering the technological lineage of the tape experiments of Schaeffer, the GRM and the Radiophonic Workshop, could prism-samplernn be a new electronic tool for concrete music, using algorithmic and statistical manipulation of recorded material instead of tape (a minimal sketch of what this might mean in practice follows this list)? And… just what is this technology? Not simply a tool, but a sound generation source, a new method of sampling and/or a potential collaborator? Is neural synthesis a new tool for contemporary musique concrète?
THE SOUND OBJECT: SONIC MATERIAL WITHOUT CONTEXT_Thinking about abstraction and acousmatic sound – concrete music removes the context and origin of sound matter, radically questioning conceptions of what can be deemed music. A machine learning model has, by default, no understanding of context or beauty; it doesn’t know it is ‘listening to glass’ and is therefore aesthetically unbiased. If the audio generated by a machine is acousmatic by nature, what sounds will it produce, and will we deem these beautiful or even musical? What exactly does this sound like?
ARTISTIC AUTONOMY_Where is the hand of the artist within the system of neural synthesis? Over which parts of the system do we have autonomy, influence and input? What does this mean for technological collaboration, and how do we compose and perform live with this material?
TECHNOLOGY | ACCESSIBILITY_Machine learning is an incredibly complex field, known for its ‘black box’ models and hidden layers of neurons. As an artist, how can I begin to understand these systems, share my process and demystify the tools and platforms so that others can viably use them for music making?
SAMPLING_What does this technology mean for sampling, and what can the machine make that I can’t?
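To make the idea of ‘statistical manipulation of recorded material’ slightly more tangible: a SampleRNN-style model does not splice tape, it learns a probability distribution over raw audio samples. The sketch below is a hypothetical illustration, not the prism-samplernn codebase itself, of how a long concrete recording might be prepared for such a model – sliced into fixed-length chunks and mu-law quantised into a small discrete alphabet. The file names, chunk length, sample rate and use of librosa/soundfile are my own assumptions.

```python
# Hypothetical sketch (not the prism-samplernn code itself) of preparing a long
# recording as training material for a SampleRNN-style model: slice the audio
# into short chunks and quantise each sample to a small discrete alphabet, so a
# network can learn a distribution over "which sample value comes next".
# Assumes numpy, librosa and soundfile are installed; file names are invented.

import os
import numpy as np
import librosa
import soundfile as sf

SR = 16000           # training sample rate (an assumption)
CHUNK_SECONDS = 8    # length of each training chunk
QUANT_LEVELS = 256   # 8-bit mu-law alphabet, as in the original SampleRNN paper

def mu_law_encode(audio, levels=QUANT_LEVELS):
    """Map floating-point samples in [-1, 1] to integers in [0, levels - 1]."""
    mu = levels - 1
    compressed = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    return np.floor((compressed + 1) / 2 * mu + 0.5).astype(np.int64)

def chunk_recording(path, chunk_seconds=CHUNK_SECONDS, sr=SR):
    """Slice one long mono recording into equal-length chunks."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    chunk_len = chunk_seconds * sr
    n_chunks = len(audio) // chunk_len
    return [audio[i * chunk_len:(i + 1) * chunk_len] for i in range(n_chunks)]

if __name__ == "__main__":
    os.makedirs("chunks", exist_ok=True)
    # "glass_recording.wav" stands in for any piece of source material.
    for i, chunk in enumerate(chunk_recording("glass_recording.wav")):
        sf.write(os.path.join("chunks", f"glass_{i:04d}.wav"), chunk, SR)
        codes = mu_law_encode(chunk)  # the discrete sequence a model would train on
        print(i, codes[:10])
```

Training then amounts to asking the network to predict the next integer in these sequences – this is where the ‘statistical’ treatment of the material happens, and where the questions above about authorship, sampling and sonic matter begin.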
TALK AT INSONIC, ZKM
At the very start of the residency, in December 2020, I gave a short talk for the inSonic festival at ZKM outlining this framework.