SOME APPROACHES TO SOUND IMMERSION
Xoán-Xil López
[This text was written for Sensxperiment 2011]
“My deepest introduction to the powers of bass happened to the sounds of the Aba Shanti-I sound system one afternoon in the 1990s at Notting Hill Carnival in West London …. Standing within the embrace of the sound system’s speakers, at a certain moment the bass became so loud that my vision clouded over with the strength of its vibrations, the liquid of my eyeballs moving with the sound. I was enveloped in a haze as my sense of being a body separate from my environment and other bodies started to dissolve” Marcus Boom1
Sound is immersive by nature. It surrounds us, it interferes with us, and it spreads, establishing a relationship of intimate proximity that goes beyond the realm of our ears. Under certain conditions, sound is not only a cochlear sensation; it is a tactile experience that affects us emotionally and physically, facilitating states such as the one described by Marcus Boom in The Wire magazine, issue 341. At moments like these, it becomes obvious that “hearing is not a discrete sense …. We feel low sound vibrate in our stomachs and start to panic, sharp sudden sound makes us flinch involuntarily, a high pitched scream is emotionally wrenching2”.
This is what happens when vibrations make us feel that we are losing control, that we are faced with something that overwhelms us to the point where reason cannot keep up and our consciousness is modulated, provoking experiences such as those described by Violet Paget in her book Music and its Lovers (1932). One of the subjects of this study describes his entrance into the Cathedral of Santa Maria del Fiore in these words: “I had instantly a very great sense of what I might call immersion: an utter change of mode of being into as different an element as water or the change from complete silence to voluminous sound3”.
Sound is an invisible, vibrating presence, a flow with which we can usually connect and disconnect at will. Sometimes, however, it bursts in and imposes itself, demanding to be heard. Jean-François Augoyard and Henri Torgue refer to this kind of situation as sharawadji4, an exotic term that describes the sublime aesthetic experience that emerges when sound opens a breach that forces us to hold our breath. Obviously, that kind of experience depends on subjective factors, but in general it is connected with “high intensity, low frequency, and rhythmic irregularity5”.
The sharawadji effect arises by chance; it is unexpected. But certain musical and artistic practices resort to what we might call pseudo-sharawadji, or ‘false sharawadji,’ to “overpower the soul to suspend its action,” as Edmund Burke6 would put it, amazing and confusing us.
This unsettling power that modifies our perception of time and space has been very present in the sound experimentation of recent decades, made possible by the development and improvement of systems for sound synthesis, manipulation and reproduction. These developments allow us to explore intensities and frequencies located at the thresholds of auditory perception. As a result, we now have numerous sound pieces based on the immersive characteristics of sound: pieces that use low frequencies at high intensities, very long durations or spatialization systems that virtualise and reinforce the plurifocal nature of sound, turning it into a medium whose subject is its own nature.
Natural phenomena like storms and earthquakes produce very low frequencies, even below the audible range (approx. 20 Hz), which certain cultures associate with “danger, sadness or melancholy7”. This is one reason why several composers of the 19th century, who believed in ‘absolute music’ as a means to express feelings that cannot be explained in any other way, used that kind of frequency. Some 19th-century pieces call for a frequency of 30.87 Hz (the note B0) that cannot be played on a standard double bass; to perform these compositions you need a double bass with an extension or with five strings. Some time later, this limit was exceeded by the octobass included in some compositions by Berlioz, Borodin, Mahler, Stravinsky and others. The octobass is a three-string instrument built by Jean-Baptiste Vuillaume in 1849. It measures almost 4 metres in height and does not play lower notes than an extended double bass, as Berlioz explains in an 1855 update of his Grand traité d’instrumentation et d’orchestration modernes, but some later versions are able to reach approximately 16.35 Hz (the note C0), the same limit frequency as the Imperial piano made by the Bösendorfer company. This piano is a special model with nine extra keys, ordered by Busoni in 1900 with the intention of adapting some of the organ pieces by J.S. Bach that include that infrasonic tone.
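For reference, both of these pitches can be checked against the standard equal-temperament formula, in which each semitone multiplies the frequency by the twelfth root of two. A minimal Python sketch, assuming the usual A4 = 440 Hz tuning:

```python
# Equal-tempered frequency of a pitch, assuming A4 = 440 Hz.
# MIDI note numbers are used as a convenient index: A4 = 69, B0 = 23, C0 = 12.

def note_frequency(midi_note: int, a4: float = 440.0) -> float:
    """Return the equal-tempered frequency in Hz of a given MIDI note number."""
    return a4 * 2 ** ((midi_note - 69) / 12)

for name, midi in [("B0 (five-string double bass)", 23), ("C0 (octobass / Imperial piano)", 12)]:
    print(f"{name}: {note_frequency(midi):.2f} Hz")
# B0 comes out at about 30.87 Hz and C0 at about 16.35 Hz,
# right at the lower limit of human hearing.
```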
In 2003, the University of Hertfordshire and the National Physical Laboratory carried out an experiment which showed that the infrasonic frequencies produced by an organ “intensify the current emotional state of the listeners8”. This experiment also linked those frequencies with a rise in heart rate related to the anxiety and exaltation provoked by religious fervour. The psychologist Richard Wiseman even states that some paranormal perceptions emerge from constant vibrations around 19 Hz reinforced by standing waves9.
The link between low frequencies and supernatural phenomena is also present in shamanic rituals, and there are several archaeoacoustic studies of the properties of certain spaces that amplify the lower frequencies of percussion instruments10. A good example is the dungchen, a Buddhist trumpet some five metres long used in the Himalayas that, “when played with its funnel turned in the direction of the high mountains, produces a tremendous echo effect” and “infrasound frequencies” with the power to “unite Heaven and Earth, light and darkness11”.
This idea of low frequencies as a link between “Heaven and Earth”, between the material and the immaterial, the tangible and the intangible, is related to how we feel sounds located at the lower threshold of hearing. As Jeremy Gilbert and Ewan Pearson explain: “the difference between matter and energy can be expressed as a simple difference between the speed at which particles vibrate; the particles which make up ‘matter’ vibrating more slowly than those that make up ‘energy’12”. In this view, as we move down the frequency scale, sound becomes denser. Lower sounds are more ‘physical,’ more omnidirectional and more ‘real,’ and thus more immersive. Some music genres, such as dub, take advantage of this phenomenon to turn the dance hall into a ‘ritual’ space in which “bass traditionally acts as the pressure drop … the low end has become spectral walls of infrapressure which buckle and fold you into colossal pockets of solid air, warm banks that loom up to surround you13”.
In recent years, this organic relationship between the spectator and the space, conceived as a stimulating and enveloping architecture, has been a central line of work in music and sound art, two disciplines that cross over in many ways. A good example is the work of the American artist Mark Bain. Bain grew up in a family of architects, and he achieves immersion by transforming buildings into vibrating structures capable of playing sound. We, as spectators, are placed in the middle of a resounding environment, as if we were inside a bell. Bain’s experiments are based on the ability of matter to produce sound, but also on the ability of sound to create “invisible architectures,” habitable sculptures that you can feel “yet you don’t see anything, and you don’t put on any glasses, or you don’t have any kind of virtual reality, any kind of apparatus, but you just go there and sense its presence somehow, like a ghost entity14”. There are numerous artists who experiment with the acoustics of spaces, feedback and resonance, such as Alvin Lucier, John Driscoll, Raviv Ganchrow and ILIOS, and pieces like Panels (2010) by Paul Devens.
One of the most representative examples of this kind of ‘immersive architecture’ is the famous Dream House by La Monte Young and Marian Zazeela, in which pure tones are combined with simple but effective visual elements. This transformative combination is also found in the work of the Belgian artist Ann Veronica Janssens, in the LSP (Laser/Sound Performances) by Edwin van der Heide, and in pieces like Filmachine (2006) by Keiichiro Shibuya and Takashi Ikegami, db (2000) by Ryoji Ikeda, Syn chron (2004) by Carsten Nicolai and ZEE (2008) by Kurt Hentschlager, amongst other recent examples.
In the strictly sonic domain, the interesting thing about the Dream House is that its main resource for creating an immersive state is duration. The Dream House was installed for the first time in 1966 in a SoHo loft, working at once as a composition, a sound installation and a performative space. Since then, it has been installed in different places for long periods of time15, acting as “a continuous frequency environment in sound and light with singing from time to time16” that “after a year, ten years, a hundred years or more of a constant sound, would not only be a real living organism with a life and tradition all its own but one with a capacity to propel itself by its own momentum17”.
Sine waves produced by several oscillators set to different volumes organise a space in which the intensity of the sound generates different areas of pressure. That is, “sine waves of different frequencies will provide an environment in which the loudness of each frequency will vary audibly at different points in the room, given sufficient amplification … This phenomenon … makes the listener’s position and movement in the space an integral part of the sound composition … allowing the listener to actually experience sound structures in space in the natural course of exploring the environment18”. Young proposes an alternative to his minimalist contemporaries, who relied on rhythmic ostinatos: he intensifies the sensation of flow by expanding time with long drones, a resource that produces the sensation of being “explicitly immersed in a fog of sound19”. The result is a compositional practice based not on a quasi-linguistic formalism (motifs, semi-phrases, blocks) but on the ontology of acoustic perception, perhaps the biggest paradigm shift in the context of 20th-century sound creation. The waves that inhabit the Dream House, tuned in just intonation, cause sensations that we cannot appreciate when we listen to tonal relations based on equal temperament, standardised since the 18th century. The resulting harmonic proportions activate our ears in a way different from what we are used to.
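As a rough illustration of why each frequency maps differently onto the room: in an idealised reflective space, the pressure minima of a sustained sine tone recur about every half wavelength, so low tones vary over metres while high partials vary over centimetres. A minimal Python sketch, assuming a speed of sound of 343 m/s and purely illustrative frequencies:

```python
# Rough spacing between the pressure minima of a sustained sine tone in a
# reflective room: minima recur roughly every half wavelength (an idealised
# model that ignores room geometry and absorption).

SPEED_OF_SOUND = 343.0  # m/s, dry air at about 20 °C

def node_spacing(frequency_hz: float) -> float:
    """Distance in metres between successive pressure minima of a standing wave."""
    wavelength = SPEED_OF_SOUND / frequency_hz
    return wavelength / 2

for f in (60.0, 240.0, 960.0):  # illustrative frequencies, not Young's tunings
    print(f"{f:6.1f} Hz -> minima roughly every {node_spacing(f):.2f} m")
# Walking through the room therefore changes which components of the chord
# dominate at the listener's position.
```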
In this sense, we find especially interesting certain works based on otoacoustic emissions, in particular those known as DPOAEs (distortion-product otoacoustic emissions), which exploit not only the resonances of the space that surrounds us but also those inside our own ears.
This phenomenon, documented by Giuseppe Tartini in his Trattato di musica secondo la vera scienza dell’armonia (1754) under the name terzo suono and mentioned by numerous acousticians since then, occurs when, under certain conditions, the combination of two frequencies stimulates the basilar membrane in such a way that our ears emit a third signal not present in the original source; we hear it as if it were produced inside our own head.
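A simple way to sketch the setup: two pure primary tones at frequencies f1 and f2 are played, and the cochlea itself may add distortion products, most prominently near twice the lower frequency minus the higher (2f1 − f2), together with the plain difference f2 − f1 that Tartini described as the terzo suono. The following Python sketch only writes the two primaries to a file; the third tone, if it appears, is generated in the listener’s ear, and the 1000/1200 Hz values are merely illustrative:

```python
# Two pure primaries suitable for eliciting a distortion-product otoacoustic
# emission. The ear (not this signal) may add energy near 2*f1 - f2 and f2 - f1.
import math
import struct
import wave

SAMPLE_RATE = 44100
F1, F2 = 1000.0, 1200.0   # illustrative primaries, f2/f1 ratio of 1.2
DURATION = 5.0            # seconds

print("expected cubic distortion product:", 2 * F1 - F2, "Hz")   # 800 Hz
print("expected difference tone (terzo suono):", F2 - F1, "Hz")  # 200 Hz

with wave.open("primaries.wav", "w") as out:
    out.setnchannels(1)
    out.setsampwidth(2)           # 16-bit samples
    out.setframerate(SAMPLE_RATE)
    for n in range(int(SAMPLE_RATE * DURATION)):
        t = n / SAMPLE_RATE
        sample = 0.4 * math.sin(2 * math.pi * F1 * t) + 0.4 * math.sin(2 * math.pi * F2 * t)
        out.writeframes(struct.pack("<h", int(sample * 32767)))
```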
Among the artists who have experimented systematically with this phenomenon, Maryanne Amacher stands out. Amacher has worked with different aspects of what she calls “perceptual geographies.” Her intention is to provoke “a kind of music where the listener actually has vivid experiences of contributing this other sonic dimension to the music that their ears are making20”. A good example of this use of the ‘third tone’ is the album Sound Characters: Making the Third Ear (Tzadik, 1999). Listened to at a certain volume, it “will cause your ears to act as neurophonic instrument” and will make the audience feel “music streaming out from their head, popping out of their ears, growing inside of them and growing outside of them, meeting and converging with tones in the room21”.
Jacob Kirkegaard follows the same principle in Labyrinthitis (2007), made for the Medical Museum of Copenhagen. The Danish sound artist uses recordings made inside his own ears, composing a piece in which the resulting tones are reinforced to create a succession of derived DPOAEs, a kind of waterfall of descending tones. These tones also trace a symbolic spiral that imitates the inner shape of the cochlea. The auditory space is thus expanded towards the inside of the body, provoking a deep listening experience in which we feel our own emissions, which become an important part of the composition.
This quest to overflow the ear is also present in one of Ben Vida’s latest works, esstends-esstends-esstends (PAN, 2012). Vida tries to go beyond conventional reproduction systems; his “intent with this work is to escape the stereo image and create an activated listening space of expanded spatialization. Using just intoned pitch combinations to produce difference tones and harmonic distortions, sound materials are created that emanate from both the playback speakers and inner ear of the listener22”.
Other artists seek similar inner-hearing experiences and immersive sensations through another physical characteristic of our ears: bone conduction. Good examples of this practice are Handphone Table (1978) by Laurie Anderson and the audio-tactile performance Stimuline (2008) by Lynn Pook and Julien Clauss.
Besides ultra-low frequencies, deep and sustained sounds, and phenomena such as otoacoustic emissions and bone conduction, the most common immersive resource is spatialization. Spatialization is present throughout the history of music, but it becomes more sophisticated in contemporary music, especially when electroacoustic reproduction systems are used in which the position and movement of sound are compositional variables.
There are two main spatialization strategies. In the first, called ‘point-source,’ each speaker reproduces one sound source. As an example of this strategy we could mention The Forty Part Motet (2001) by Janet Cardiff & George Bures Miller, in which each of the voices singing the Renaissance motet Spem in Alium by Thomas Tallis, conceived as a multi-choral piece, is reproduced by its own speaker. The second strategy is based on the movement of sound through space, and so requires at least a two-channel configuration. In this regard there have been interesting experiments, such as the diaphonic system developed by Val del Omar (1944), but the traditional standard is stereophonic sound, pioneered at the end of the 19th century by Clément Ader. This system is based on the displacement of sound along the left/right axis and is usually heightened with echoes and reverberations that enhance the illusion of depth. This kind of immersion works better when combined with headphones and certain recording and processing techniques (binaural, holophonic, HRTF: head-related transfer function) that add oblique movements and provide a hyper-realistic feeling inside an intimate hearing space. This approach is used by sound artists like Dallas Simpson.
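At its simplest, movement along the left/right axis is produced by amplitude panning: a position is mapped to a pair of channel gains, conventionally following an equal-power law so that perceived loudness stays constant as the source moves. A minimal Python sketch of this generic textbook pan law (not any particular artist’s system):

```python
# Equal-power stereo panning: map a position in [-1, 1] (hard left .. hard right)
# to a pair of left/right gains whose squared sum stays constant.
import math

def equal_power_pan(position):
    """Return (gain_left, gain_right) for a position between -1.0 and 1.0."""
    angle = (position + 1) * math.pi / 4   # maps [-1, 1] to [0, pi/2]
    return math.cos(angle), math.sin(angle)

for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    gl, gr = equal_power_pan(pos)
    print(f"pos {pos:+.1f}: L={gl:.3f} R={gr:.3f} (total power={gl*gl + gr*gr:.2f})")
```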
If we increase the number of speakers (quadraphonic, 5.1, 7.1, octophonic) and elaborate their layout (the Acousmonium of the GRM, the Klangdom at ZKM), we have more possibilities. This allows the use of several techniques of spatial synthesis based on psychoacoustics, such as Ambisonics, VBAP (Vector Base Amplitude Panning), DBAP (Distance Based Amplitude Panning) or Wave Field Synthesis, the latter requiring a complex reproduction matrix, such as the 2,700-speaker system at the Technische Universität Berlin or the more modest array of 192 speakers and 8 subwoofers set up by The Game of Life Foundation (The Hague).
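Of these techniques, DBAP is perhaps the easiest to outline: each speaker receives a gain that decreases with its distance from the virtual source, and the gains are normalised so that the total power stays constant. The following Python sketch is a simplified rendering of that idea, with a hypothetical four-speaker layout and a 6 dB rolloff per doubling of distance:

```python
# Distance Based Amplitude Panning (DBAP), simplified: speaker gains fall off
# with distance from the virtual source and are normalised to constant power.
import math

def dbap_gains(source, speakers, rolloff_db=6.0, blur=0.1):
    """Return one amplitude gain per speaker for a virtual source at (x, y)."""
    a = rolloff_db / (20 * math.log10(2))   # amplitude exponent; 6 dB rolloff -> gain ~ 1/d
    dists = [math.hypot(sx - source[0], sy - source[1]) for sx, sy in speakers]
    dists = [math.sqrt(d * d + blur * blur) for d in dists]  # spatial blur avoids division by zero
    raw = [1 / d ** a for d in dists]
    k = 1 / math.sqrt(sum(g * g for g in raw))                # normalise to constant total power
    return [k * g for g in raw]

# Hypothetical layout: four speakers at the corners of a 4 m x 4 m room.
speakers = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
gains = dbap_gains((1.0, 1.0), speakers)
print([round(g, 3) for g in gains], "total power:", round(sum(g * g for g in gains), 3))
```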
In the 1950s, electroacoustic music begins to work with spatialization in a very methodical manner. In 1951, Pierre Schaeffer carries out sound-trajectory experiments with his potentiomètre d’espace. In 1956, Stockhausen uses five groups of speakers for his piece Gesang der Jünglinge, and in 1957 the California Academy of Sciences organises the first concert of the Vortex series. These performances take place in the Morrison Planetarium with a system of 30 projectors and 38 speakers that allows the generation of circular sound movements, an effect dubbed the “vortex effect23”. Henry Jacobs and Jordan Belson organise several performances offering “an enveloping audio-visual experience in a completely controlled environment”, with the aim of proposing a “completely new conception of the relations between listener and space24”.
In 1958, the Vortex project travels to Brussels, where that same year one of the most famous immersive spaces is installed: the Philips Pavilion of the Brussels World’s Fair. This pavilion, designed by Le Corbusier as “a stomach assimilating 500 listener-spectators, and evacuating them automatically at the end of the performance25”, spatialized sound over 425 speakers, combined with plays of light, shapes outlined with ultraviolet light and images selected by Le Corbusier and edited by Philippe Agostini. The show was such a success that it created a new style, repeated in subsequent editions of the World’s Fair.
Iannis Xenakis, who contributed to the design of the Philips Pavilion as Le Corbusier’s assistant and even composed a sound piece used as an introduction to Poème électronique, later conceived his Polytope for the French pavilion at the 1967 International and Universal Exposition in Montreal, intending to “immerge the audience in a certain atmosphere to make it forget the outside world or to weaken its sense of reality26”. Three years later in Osaka, David Tudor, Gordon Mumma and Lowell Cross set up a pavilion for Pepsi, and Stockhausen set up another one for Germany.
These environments invite reflection on the dematerialisation of architecture in relation to the idea of experience and the transformation of atmospheres. Architecture is understood as a medium defined by events rather than as a lifeless structure, a recurring idea reflected in the description of these kinds of spaces as ‘living creatures’. This idea of aural architecture is also present in the minimalist installations of Bernhard Leitner, such as Sound Lines (1972), Narrow Sound Space (1974) and Sound Cube (1980).
The suggestive power of these immersive experiences is, or should be, more than simple astonishment at the spectacular nature of the media employed. It should aspire to provide an axis of transformation, some kind of knowledge reaching us through sensory experience. In short, it should expand our hearing beyond the casual event, as La Monte Young puts it:
“When we go into the world of a sound, it is new. When we prepare to leave the world of a sound, we expect to return to the world we previously left. We find, however, that when the sound stops, or we leave the area in which the sound is being made, or we just plain leave the world of the sound to some degree, that the world into which we enter is not the old world we left but another new one. This is partly because we experienced what was the old world with the added ingredient of the world of the sound… Once you enter a new world, of a sound, or any other world, you will never really leave it27”.
Notes:
1. ^ “Low end theories”, The Wire, July 2012, p. 31.
2. ^ Dyson, Frances: Sounding New Media: Immersion and Embodiment in the Arts and Culture. University of California Press. 2009. p. 4.
3. ^ Blesser, Barry & Salter, Linda-Ruth: Spaces Speak, Are You Listening? The MIT Press. 2009. p. 226.
4. ^ Augoyard, Jean-François & Torgue, Henri: Sonic Experience: A Guide to Everyday Sounds. McGill-Queen’s University Press, 2006. p. 117.
5. ^ Ibid. p. 122.
6. ^ Burke, Edmund: A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful. London: J. Dodsley. 1773. p. 151.
7. ^ Augoyard, Jean-François & Torgue, Henri, ibid. p. 42.
8. ^ Ramsayer, Kate: “Infrasonic Symphony. The greatest sound never heard”. Science News, Vol. 165, No. 2. 2004, pp. 26-28. p. 27.
9. ^ Tandy, Vic: “Something in the Cellar”. Journal of the Society for Psychical Research, Vol. 64.3, No. 860. 2000, pp. 129-140.
10. ^ Watson, Aaron: “The sounds of transformation”. In Price, Neil (ed.): The Archaeology of Shamanism. London: Routledge. 2001, pp. 178-192. p. 187.
11. ^ Vähi, Peeter: “Buddhist Music of Mongolia”. Leonardo Music Journal, Vol. 2, No. 1. 1992, pp. 49-53. p. 50.
12. ^ Gilbert, Jeremy & Pearson, Ewan: Discographies: Dance, Music, Culture and the Politics of Sound. London. Routledge. 1999. p. 46.
13. ^ Eshun, Kodwo: More Brilliant Than The Sun. Adventures in sonic fiction. London. Quartet Books. 1998. p. 129.
14. ^ Interview with Mark Bain, September 17th 2012 at Sensxperiment 2011 [online: http://www.mediateletipos.net/archives/19945]
15. ^ Currently at MELA Foundation, 275 Church Street, between Franklin St & White St New York, NY 10013.
16. ^ Ibid. p. 10.
17. ^ Young, La Monte & Zazeela, Marian: Selected Writings (1959-1969). [online: http://www.ubu.com/historical/young/young_selected.pdf]. p. 16.
18. ^ Ibid. p. 11-12.
19. ^ Ganchrow, Raviv: “Approaches to space and sound”. In: Altena, Arie & Sonic Acts (Ed.): The poetics of space. Amsterdam. Sonic Acts Press. 2010, pp. 33-50. p. 41.
20. ^ Maryanne Amacher in Conversation with Frank J. Oteri. Friday, April 16th 2004 (4-5 p.m.). Kingston, New York. In: NewMusicBox. [online: http://t.co/5fAAmLe6]
21. ^ Amacher, Maryanne: Notes on Sound Characters: Making the Third Ear. Tzadik TZ 7043, 1999.
22. ^ http://www.pan-act.com/pages/releases/pan23.html
23. ^ Keefer, Cindy: “Jordan Belson and the Vortex Concerts: Cosmic Illusions”. In: Altena, Arie & Sonic Acts (Ed.): The poetics of space. Amsterdam. Sonic Acts Press. 2010, pp. 99-104. p. 100.
24. ^ Ibid.
25. ^ Mondloch, Katie: “A Symphony of Sensations in the Spectator: Le Corbusier’s Poème électronique and the Historicization of New Media Arts”. Leonardo, Vol. 37, No. 1. 2004, pp. 57-61. p. 59.
26. ^ Sterken, Sven: “Towards a Space-Time Art: Iannis Xenakis’s Polytopes”. Perspectives of New Music, Vol. 39, No. 2. Princeton: Princeton University Press, pp. 262-273. p. 265.
27. ^ Young, La Monte & Zazeela, Marian: Selected Writings (1959-1969). [online: http://www.ubu.com/historical/young/young_selected.pdf]. p. 75.