Notes, quotes and things to come back to.
Here I am again building on the (elaborate) affordances of behaviour (Gibson, 1977, 76) by using generativity. The glitch sounds heard prominently at the beginning and end of this video are processed guitar samples, triggered at random using Ableton’s ‘Follow Actions’ function for clip playback. This can be used to provide a framing, or to set the stylistic scene for the improvisation. A variety of different clips could be loaded to offer a choice of impetus for the improvisation.
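As a minimal sketch of the random-triggering behaviour I am describing (in Python rather than Ableton’s own API, with hypothetical clip names; I am assuming something like the ‘Other’ Follow Action, which jumps to a different clip in the group):

```python
import random
import time

# Hypothetical clip names standing in for the processed guitar samples.
clips = ["guitar_glitch_01.wav", "guitar_glitch_02.wav", "guitar_glitch_03.wav"]

def follow_action_other(current: str) -> str:
    """Pick a random clip other than the one that just played,
    much as the 'Other' Follow Action does."""
    return random.choice([c for c in clips if c != current])

current = random.choice(clips)
for _ in range(8):                 # eight triggers, for illustration only
    print(f"triggering {current}")
    time.sleep(0.5)                # stand-in for the clip's follow-action time
    current = follow_action_other(current)
```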
I recalled hearing Amy Brandon talk about her chain-based electronic processing practice in Ableton, in which an input is sent to an effects chain, which feeds another chain, which feeds another, and so on, for an extended period of time. This seemed a very efficient way to generate semi-indeterminate sound to improvise with, so with this system I attempted something similar, albeit on a much smaller scale.
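As a toy illustration of that topology (my assumption of the general shape, not Brandon’s actual setup), each stage below feeds its output into the next, so small input variations accumulate into semi-indeterminate results:

```python
import numpy as np

def drive(x, gain=4.0):
    """Soft-clipping distortion stage."""
    return np.tanh(gain * x)

def comb(x, delay=113, feedback=0.6):
    """Crude feedback delay (comb filter) stage."""
    y = np.copy(x)
    for n in range(delay, len(y)):
        y[n] += feedback * y[n - delay]
    return y

def tremolo(x, rate=3.0, sr=44100):
    """Amplitude-modulation stage."""
    t = np.arange(len(x)) / sr
    return x * (0.5 + 0.5 * np.sin(2 * np.pi * rate * t))

# One chain feeding another, feeding another...
chain = [drive, comb, tremolo, drive]

signal = np.random.randn(44100) * 0.1   # stand-in for a guitar input buffer
for stage in chain:
    signal = stage(signal)              # each stage's output becomes the next stage's input
```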
Although this system is clearly causal, I am interested in the cycle in which I produce sound which is then processed and comes back to my ears before I respond (see Butler’s “stepping outside the sound” (2014, 106)). This is something that I can feel very clearly: the great surges of sound that are audible in the legato section (08:37 onwards) definitely feel matched to the effort I have just put into the preceding guitar gesture. The feeling of receiving, in sound, exactly the energy that I am putting in is a sensation that, as I recall, largely informed my decision to choose the guitar I am playing here over several others that I tried.
I now want to think of the system as an entity with its own agency. While this system does generate sound, it does so only by processing the input from my guitar. There is some modulation of effect parameters by means of envelope following, but these mappings do not change during the course of this improvisation and respond consistently (I play louder, and an effect is heard or a parameter changes). The system therefore behaves passively, risking the explicitly causal situation which Croft labels “lamentable” (2007, 61). It is not demonstrating its own agency.
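To make that fixed mapping concrete, here is a minimal sketch (assumed parameter values and names, not the actual Ableton devices) of an envelope follower driving a wet/dry amount; because the mapping never varies, the system can only ever react:

```python
import numpy as np

def envelope_follower(x, sr=44100, attack=0.01, release=0.2):
    """One-pole envelope follower over the rectified input signal."""
    a = np.exp(-1.0 / (attack * sr))    # smoothing coefficient while rising
    r = np.exp(-1.0 / (release * sr))   # smoothing coefficient while falling
    env = np.zeros_like(x)
    for n in range(1, len(x)):
        coeff = a if abs(x[n]) > env[n - 1] else r
        env[n] = coeff * env[n - 1] + (1 - coeff) * abs(x[n])
    return env

signal = np.random.randn(44100) * 0.2        # stand-in for the guitar input
env = envelope_follower(signal)
wet_amount = np.clip(env * 4.0, 0.0, 1.0)    # the fixed, unchanging mapping:
                                             # louder playing -> more effect, always
```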
Ben-Tal (2003) sees interactive electronic pieces as complex systems, in which the interaction between performer and computer is part of several interlocking sets of relationships: “Central…is a focus on the way information – broadly conceived – is transmitted and transformed within the system” (2003, 161). This moves towards the interdependence that I am thinking of in the working title of this current project, which is an enactive way of thinking, where musical meaning arises from the interaction between myself and the system (within the system, with sonic feedback mediated through the environment…). The human and the machine listening and responding to each other is what Ben-Tal calls a ‘mutual listening scenario’ (2023, 125-126).
Some further quotes:
“a machine could be developed as a you, (a second person, or social agent) with whom the subject interacts and communicates” (Leman 2007, 137-138).
A “musical environment” establishes “interaction between a human agent and a technological agent” (Leman 2007, 174). “The technology no longer forwards the musical actions of a performer. Instead, it interprets and generates actions on its own. Accordingly, the mediation is no longer based on a one-way transmission of information, but on dialogue between humans and machines. In this concept, packages of physical energy, transmitted between humans and machines, form the primitives on top of which humans can develop communication patterns at the intentional level.” (Leman 2007, 174).