Tuesday 02.08.2011 22:30

Outprovisation — Improvisation with Live-Recorded Environmental Sound at ICMC, Huddersfield

ICMC 2011 Late Night Concert 2, Students’ Union, Huddersfield, UK

Outprovisation breaks the tyranny of the instant and fragments the unity of time and place, by improvising with unpredictable live streams from the vicinity of the concert venue: the bar, a park, the street, a shopping mall…
It simultaneously enlarges the temporal horizon, the loci of presence, and the sonic space, doubling the latter with the abstract space of transformed sounds.

Drawing on compositional models proposed by John Cage, Luc Ferrari, and Sam Britton, it is possible, through a process of re-composition, to record environmental sounds and to interpret and contextualise them within a musical framework. The piece demonstrates the possibilities of such an approach by using corpus-based concatenative synthesis as a tool for real-time analysis and categorisation of recorded sound, whilst simultaneously effecting a process of re-synthesis using specific compositional strategies and techniques.

The performance starts with nothing at all and, solely by recording and re-composing environmental sound, evolves a musical structure, tracing a non-linear path through the growing corpus of recorded sound and thereby orchestrating a counterpoint to our own linear perception of time.
The aim is to construct a compositional framework from any given source material that may be interpreted as musical by virtue of the fact that its parts have been intelligently re-arranged according to specific sonic and temporal criteria.

Technical Notes

Four microphones are placed at various sites around the concert venue, chosen beforehand by the artist for their sonic quality and variety, and for their potential to react to the performance, e.g.:

  • bar of the concert venue
  • street outside the concert hall
  • park with birds
  • shopping mall
  • a university building’s rooftop
  • the library
  • railway station/tracks

The laptop player re-conquers expressivity by using gestural controllers such as a pressure-sensitive XY-pad, and piezo pickups on various surfaces that allow the performer to hit, scratch, and strum the corpus of sound, exploiting all its nuances.

The performance uses the concept of live interactive corpus-based synthesis [1]. Starting from an empty corpus, the CataRT software [2] builds up a database of the sound recorded live by segmenting the input sound and analysing each segment for a number of sound descriptors that characterise its sonic qualities.
The performer then re-combines the sound events into new rhythmic and timbral structures, simultaneously proposing novel combinations and evolutions of the source material, selected according to their proximity to a target position in the descriptor space that the performer controls. The metaphor for composition here is an explorative navigation through the ever-changing sonic landscape of the corpus being built up from live recording.
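The selection principle described above can be sketched in a few lines: segment incoming audio into units, compute a descriptor vector for each, and return the unit nearest a performer-controlled target position in descriptor space. This is a minimal illustrative sketch, not CataRT's actual implementation; the function names, the fixed-length segmentation, and the two toy descriptors (RMS loudness and spectral centroid) are assumptions for the example.

```python
import numpy as np

def segment(audio, sr, unit_ms=250):
    """Split audio into fixed-length units (an actual system
    might instead segment at detected onsets)."""
    n = int(sr * unit_ms / 1000)
    return [audio[i:i + n] for i in range(0, len(audio) - n + 1, n)]

def describe(unit, sr):
    """Two toy descriptors: loudness (RMS) and spectral centroid (Hz)."""
    rms = np.sqrt(np.mean(unit ** 2))
    spectrum = np.abs(np.fft.rfft(unit))
    freqs = np.fft.rfftfreq(len(unit), 1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

class Corpus:
    """Grows as live recordings arrive; units are retrieved by
    proximity to a target point in descriptor space."""
    def __init__(self):
        self.units, self.features = [], []

    def add_recording(self, audio, sr):
        for u in segment(audio, sr):
            self.units.append(u)
            self.features.append(describe(u, sr))

    def select(self, target):
        """Return the unit whose descriptor vector is closest
        (Euclidean distance) to the target position."""
        feats = np.array(self.features)
        return self.units[int(np.argmin(np.linalg.norm(feats - target, axis=1)))]
```

In use, a gestural controller such as the XY-pad mentioned below would continuously update the `target` vector, so that moving across the pad navigates the descriptor space of whatever has been recorded so far.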

CataRT is developed by the Real-Time Musical Interactions team (IMTR) at Ircam–Centre Pompidou, Paris, and is used in various contexts of composition, sound design, performance, and installation. It is a modular system released as free software, running in Max/MSP using the FTM&Co. extensions developed by Norbert Schnell and collaborators.


[1] Schwarz, Diemo. Corpus-Based Concatenative Synthesis: Assembling Sounds by Content-Based Selection of Units from Large Sound Databases. IEEE Signal Processing Magazine, March 2007, vol. 24, no. 2, p. 92–104.
[2] Schwarz, Diemo, Cahen, Roland, and Britton, Sam. Principles and Applications of Interactive Corpus-Based Concatenative Synthesis. Journées d’Informatique Musicale (JIM), Albi, France, March 2008.