Greg Hooper
Language dominates the planet, but even though “the ears have no ear-lids” (as Murray Schafer puts it) they are deaf to the language of data. Our ears listen for someone approaching, to the car pulling into the driveway, to that special silence between the child getting hurt and the child crying, but we don’t listen to data—we look at it. We see what we mean in pie-charts, bar graphs, scatter plots, box plots, histograms, fly-throughs, diagrams, schematics, blueprints, head maps, x-rays and brain scans.
Bringing data to our ears was the theme of Sonif(y), a day-long public forum of talks and panel discussions held as part of ICAD 2004 (International Conference on Auditory Display), a conference exploring “the art, science and design of audible information.” The forum was followed by Listening to the Mind Listening, a concert of sonifications (data turned into sound) in the Opera House Studio. Cards on the table—I have to declare my participation in both the concert and the composers’ panel discussion.
There were two keynote speeches, one by sonification pioneer Gregory Kramer, the other by sound artist Ros Bandt. Both spoke of the need to bring aesthetics into auditory design. Kramer discussed how the aesthetics of the sounds we make influence our perceptions, emotions and decision making. In his view we are still a long way from understanding how to usefully sound out information about the world.
Bandt’s approach to auditory design builds on her considerable experience as a sound artist and her work on the history of Australian sound design. For Bandt, Australia is a “sung country”, and this has profound ethical consequences for those practicing sound design. She asked that we question our right to add sound into the landscape, only doing so with care and concern and showing “good manners.”
The ethics of auditory design also featured in a panel discussion on the art/science mix of “Data Aesthetics”. There was a divide on the existence or otherwise of aesthetic universals—the old nature/nurture debate that sounds a bit pre-biological nowadays. However, most of the speakers agreed that auditory designs shouldn’t swamp the data, that the data should be allowed to speak for itself and find an objective expression in sound. Can’t see it myself. To be heard, data has to be mapped onto sound and that mapping has to be chosen in what is, at least in part, a cultural act. What’s a good mapping? One that appeals to the people you’re appealing to I guess.
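For the technically minded, here is a tiny sketch of that point: the same run of numbers pushed through two different mappings. Every scale, range and value in it is my own invention for illustration, not anyone's actual system from the forum.

```python
# The same data under two mappings. Neither is more 'objective' than the other;
# both are design choices. All scales and values here are purely illustrative.

data = [0.1, 0.4, 0.35, 0.8, 0.6, 0.9]   # some normalised data stream

def linear_pitch(x, low=220.0, high=880.0):
    """Map a value in 0..1 straight onto a frequency in Hz (one choice)."""
    return low + x * (high - low)

def pentatonic_pitch(x, root=220.0):
    """Map the same value onto a pentatonic scale (a very different choice)."""
    scale = [0, 2, 4, 7, 9]               # semitone steps above the root
    degree = scale[int(x * (len(scale) - 1))]
    return root * 2 ** (degree / 12)

print([round(linear_pitch(x)) for x in data])      # smooth glides, 'letting the data speak'
print([round(pentatonic_pitch(x)) for x in data])  # instantly musical, and just as much a choice
```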
Mapping stock market data to natural sounds was the focus of Brad Mauney’s presentation of some experimental work. The idea was to present changes in share prices as an ambient soundscape of nature sounds that would sit out on the periphery, waiting for some action in the data stream. More thunder meant the market was moving down, birds singing meant salad days were here again. The stock market traders who checked it out thought there could be a place for this sort of ambient soundscape in the hurly-burly of striving for squillions.
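Purely as illustration, and not as Mauney's actual design, a sketch of the kind of rule such an ambient soundscape might run on; the sound names and thresholds are invented.

```python
# A sketch of the ambient-soundscape idea: price movement chooses a background
# nature sound. Sound names, thresholds and update logic are invented here;
# they are not Mauney's actual system.

def soundscape_layer(price_change_pct: float) -> str:
    """Pick an ambient layer from the direction and size of a price move."""
    if price_change_pct <= -1.0:
        return "thunder"      # market moving down: more thunder
    if price_change_pct >= 1.0:
        return "birdsong"     # market moving up: salad days
    return "light_rain"       # little movement: neutral background on the periphery

for change in (-2.3, 0.1, 1.8):
    print(f"{change:+.1f}% -> play '{soundscape_layer(change)}'")
```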
Thilo Hinterberger spoke of using auditory feedback and a brain/computer interface to help people with paralysis communicate. For someone who can’t even shift focus or gaze, auditory feedback provides a mechanism for learning to control brain activity and pick out letters and yes/no-type answers on a computer. Blind and visually impaired people also use sound for environmental feedback. The standard story is that going blind is like a magic potion that gives you super hearing. Unfortunately that’s not true, and a group of researchers spoke of the extensive training needed to give a blind or visually impaired person the skills to navigate something as commonplace as a busy intersection.
The forum finished and people wandered out to wait for the evening and Listening to the Mind Listening, a concert of music based directly on the brain activity (EEG) of someone listening to a piece of music. The recording was taken from 26 electrodes spaced across the head, so the music for the concert was designed to be heard spatially, as if you were inside the head and listening to what was going on at all those electrodes. There were also some restrictions on how the music could be composed. Firstly, the pieces had to keep time with the activity of the brain, so each piece was 5 minutes long, just like the brain recording. Secondly, the pieces had to be based directly and moment by moment on the data, such that changes in the music represented changes in the brain’s activity.
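As a rough sketch of what those restrictions ask of a composer: a direct, moment-by-moment mapping from 26 channels to sound events. The sampling rate, scaling and pitch mapping below are my own placeholder choices, not part of the concert brief.

```python
# A sketch of a direct, moment-by-moment EEG-to-sound mapping under the
# concert's constraints. Sampling rate, scaling and pitch range are assumed.

import math

N_CHANNELS = 26        # electrodes spaced across the head
DURATION_S = 5 * 60    # each finished piece had to span the 5-minute recording
SAMPLE_RATE = 256      # Hz; a typical EEG rate, assumed here

def eeg_to_events(eeg):
    """eeg: N_CHANNELS lists of samples (assumed scaled to roughly -1..1).
    Yield (time, channel, pitch_hz) events so that every change in the data
    produces a change in the sound, channel by channel across the head."""
    for ch, samples in enumerate(eeg):
        for i, value in enumerate(samples):
            t = i / SAMPLE_RATE                    # keeps the piece in time with the recording
            pitch_hz = 220.0 + 440.0 * abs(value)  # amplitude drives pitch: one possible direct mapping
            yield (t, ch, pitch_hz)                # the channel index can drive which speaker plays it

# toy data: one second of a synthetic 10 Hz signal on every channel, just to run the sketch
toy = [[math.sin(2 * math.pi * 10 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
       for _ in range(N_CHANNELS)]
print(sum(1 for _ in eeg_to_events(toy)), "sound events from one second of toy data")
```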
The concert was packed out. People wandered among the speakers, listening to what the brain was up to on this or that side of the head. EEG data is not that regular or simple—often in the neuro literature EEG is described as noise, so most pieces had a semi-random quality to their rhythms. Almost all of the work could be classified as ‘difficult listening’. The biggest surprise for me was a piece that sounded like a small jazz ensemble. Mostly though, the sounds were synthetic drones and washes, overlaid with various chirps and blips. Some pieces had recognisable sections, others were much the same throughout. Each piece was surprisingly different given that every composer used the same data. The audience response varied from puzzled tolerance to almost reverential eyes-closed contemplation. Some people moved around a lot, others sat still. Some left early.
How did the music operate as a window onto the brain? Hard to tell from just one listen, but for me it was great and the audience were as enthusiastic as I’ve heard for a concert of electronica. A great example of public engagement with research.
Sonif(y) and Listening to the Mind Listening, composers Guillaume Potard, Greg Schiemer, Gordon Monro, Hans Van Raaij, Tim Barrass, John A Dribus, David Payling, Roger Dean, Greg White, David Worrall, John Sanderson, Tom Heuzenroeder, Thomas Hermann, Gerold Baier, Markus Muller, Greg Hooper; The Studio, Sydney Opera House, July 8
RealTime issue #63 Oct-Nov 2004 pg. 47