Thursday 01.12.2011 10:00

Presentation and improvisation with Victoria Johnson, Malmö, Sweden

Presentation and performance by Victoria Johnson (electric violin) and Diemo Schwarz (CataRT live corpus-based concatenative synthesis) at the international symposium (Re)Thinking Improvisation, Inter Arts Center, Lund University, Malmö, Sweden.

Live corpus-based concatenative synthesis permits a new approach to improvisation, where the sound from an instrument is recontextualised by interactive, gesture-controlled software, creating a symbiosis between the two performers. The unpredictability of the sound produced during the improvisation is always part of the challenge; not knowing what will happen is an integral part of the performance.

Since its beginnings at the Live Algorithms for Music Conference in 2005 with Evan Parker and George Lewis, the concept has been developed further in encounters with Pierre Alexandre Tremblay (Canada) on bass, Etienne Brunet (France) on bass clarinet, Nicolas Mahieu (France) on double bass, Victoria Johnson (Norway) on electric violin, Jacques Pochat (France) on saxophone, Luka Juhart (Slovenia) on accordion, Pedro Rebelo (UK) on piano, Antonio Aguiar (Portugal) on double bass, and Frederic Blondy (France) on prepared piano.

In the most recent development, Diemo Schwarz on laptop re-conquers expressivity by using gestural controllers such as a pressure-sensitive xy-pad, and piezo pickups on various surfaces that allow him to hit, scratch, and strum the corpus of sound, exploiting all its nuances. This creates a gestural analogy to the violin playing.
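
As a rough illustration of the kind of gestural mapping described above, the following Python sketch maps a pressure-sensitive xy-pad to a target position in a 2D projection of the descriptor space; the function name, the bounds parameter and the rate mapping are hypothetical and do not reproduce CataRT's actual control patch.

    # Hypothetical gesture-to-corpus mapping: the pad position picks a target
    # point in a 2D descriptor projection (e.g. spectral centroid vs. loudness),
    # and pressure scales how densely sound units are triggered.
    def on_pad_event(x, y, pressure, descriptor_bounds, max_rate_hz=50.0):
        """x, y and pressure are assumed normalized to [0, 1] by the driver."""
        (x_min, x_max), (y_min, y_max) = descriptor_bounds
        target = (x_min + x * (x_max - x_min),
                  y_min + y * (y_max - y_min))
        # Pressure controls the triggering rate, from silence to a dense roll.
        rate_hz = pressure * max_rate_hz
        return target, rate_hz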

Victoria Johnson treats her electric violin as a sonic unity with effects pedals and digital audio effects, extending her instrument into the electronic realm in a way that is completely integrated with her unique playing style.
The improvisation in this setting has been performed in concerts by Victoria Johnson and Diemo Schwarz at Bergen Kunsthall in 2009 and at the club Sound of Mu in Oslo, Norway, in 2011.

Artistic Concept

The performance by an instrument player and a laptop musician playing his real-time corpus-based concatenative synthesis software CataRT is an improvisation with two brains and four hands controlling one shared symbolic instrument: the sound space, built up from nothing and nourished in unplanned ways by the sound of the instrument, explored and consumed with whatever the live instant fills it with. It creates a symbiotic relationship between the player of the instrument and the player of the software.

CataRT behaves here like a poetic metaphor, a generator of a cut-up of emotions, an interactive structure where the sonic personae exchange bodies and the score of their minds is rewritten in the instant. Their physical sonic appearance, whether familiar or unknown, is modified. But this is not the dream of the machine taking control of the human; it is the contrary:
This software creates a kind of dialectic between the age-old gesture of the musician and its digital transformation and critique. CataRT is the pathway between the acoustic instrument and a recontextualising synthetic interaction at the heart of our times. In a poetic manner, the discourse of the instrument player can be resculpted in the instant by the player of the software, the partner of the instrumentalist, making the unpredictability of the incoming material an integral part of the live performance.

Technical Concept

The performance uses the concept of live interactive corpus-based synthesis. Starting from an empty corpus, the CataRT software builds up a database of the sound played live by segmenting the instrument sound into notes and short phrases and analysing them for a number of sound descriptors that describe their sonic characteristics. The performer then re-combines the sound events into new harmonic, melodic and timbral structures, simultaneously proposing novel combinations and evolutions of the source material according to their proximity to a target position, controlled by him, in the descriptor space. The metaphor for composition is here an explorative navigation through the ever-changing sonic landscape of the corpus being built up from the live recording.
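
As a minimal sketch of this live corpus building (detailed further in the next paragraph), assuming numpy, a mono input delivered in fixed-size frames, and deliberately simplified segmentation and descriptors that do not reproduce CataRT's actual analysis chain:

    import numpy as np

    SR = 44100  # sample rate in Hz

    def descriptors(segment):
        """Analyse one sound unit for a few per-unit descriptors."""
        loudness = float(np.sqrt(np.mean(segment ** 2)))  # RMS energy
        spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
        freqs = np.fft.rfftfreq(len(segment), 1.0 / SR)
        # Spectral centroid as a stand-in for brilliance.
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
        # Spectral flatness (geometric / arithmetic mean) as a noisiness proxy.
        flatness = float(np.exp(np.mean(np.log(spectrum + 1e-12))) /
                         (np.mean(spectrum) + 1e-12))
        return np.array([loudness, centroid, flatness])

    def segment_stream(frames, threshold=0.02):
        """Crude energy-gate segmentation of the input into notes/short phrases."""
        units, current = [], []
        for frame in frames:
            if np.sqrt(np.mean(frame ** 2)) > threshold:
                current.append(frame)
            elif current:  # energy dropped below the gate: close the unit
                units.append(np.concatenate(current))
                current = []
        if current:
            units.append(np.concatenate(current))
        return units

    def build_corpus(frames):
        """The corpus: (audio, descriptor vector) pairs, grown while playing."""
        return [(u, descriptors(u)) for u in segment_stream(frames)]
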
Technically, CataRT splits the incoming sound stream (or any number of prerecorded sound files) into short segments and analyses each segment for a number of sound descriptors such as pitch, loudness, brilliance, noisiness, spectral shape, etc., or higher-level descriptors attributed to them. These sound units are then stored in a database (the corpus). For synthesis, units are selected from the database that are closest to given target values for some of the descriptors. The rate and target values of the selection are typically controlled through a 2D representation of the corpus, where each unit is a point that takes up a place according to its sonic character. Other control possibilities are external controllers or the analysis of live audio input. The selected units are then concatenated and played, possibly after some transformations. Note that corpus-based concatenative synthesis can also be seen as a content-based extension of granular synthesis that provides direct access, in real time, to grains with specific sound characteristics, thus surpassing the limited selection possibilities of granular synthesis, where the only control is the position in one single sound file.
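
Continuing the sketch above, unit selection and concatenation could then look as follows; the weighted nearest-neighbour search stands in for CataRT's unit selection, and a short linear crossfade for its concatenation, both simplified for illustration:

    import numpy as np

    def select_unit(corpus, target, weights):
        """Return the unit whose descriptors are closest to the target position;
        zero weights exclude descriptors from the match (e.g. for a 2D target)."""
        target = np.asarray(target, dtype=float)
        dists = [np.sum(weights * (d - target) ** 2) for _, d in corpus]
        return corpus[int(np.argmin(dists))][0]

    def concatenate(units, fade=256):
        """Join selected units with a short linear crossfade to avoid clicks
        (units are assumed to be longer than the fade length)."""
        out = units[0].copy()
        ramp = np.linspace(0.0, 1.0, fade)
        for u in units[1:]:
            out[-fade:] = out[-fade:] * (1.0 - ramp) + u[:fade] * ramp
            out = np.concatenate([out, u[fade:]])
        return out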
