Sonification - turning data into music ♫

With data visualisation, we can turn data into graphics so that the eye can enjoy them. What about sound?

Data visualisation is almost an art. This can be appreciated by scrolling through the R graph gallery and social media, where we are constantly bombarded by infographics and interesting charts. Over the years, certain ways of presenting data have risen and fallen, thanks to insights from psychology studies; one example is the pie chart, which used to be popular but whose use is now discouraged.

Recently, interactive graphs, videos and Shiny apps have added a further dimension to graphs: time. Time series used to be flat, 2D objects, confined to finance, econometrics, and environmental sciences. Now a graph can become fluid, thanks to animation, and we can appreciate how trends evolve. Remembering key points from a presentation can still be hard, however, especially if we do not take notes or give it our full attention.

So can you think of something that gets stuck in your head for days, that is so hard to forget that you end up getting annoyed? MUSIC! Data can be turned into music! This process is called sonification.

Sonification is the use of non-speech audio to convey information or perceptualize data[1].

Sonification is not a novel idea. John Chambers, the "grandfather" of the R programming language, and his colleagues were already experimenting with auditory data visualisation in the '70s [2], while electronic music was in its early stages and synthesisers had just gone mainstream. A review of early experiments is available [3].

Nevertheless, in the age of Google Assistant, Siri, Cortana, Alexa ... listening to data may become an option that is worth considering. A "data track" can convey information just like visuals, sometimes even better: it can add emotions (e.g. climate change can be paired with deep sounds and low notes) and highlight certain elements (e.g. a high-pitched note amongst low notes).

As an example, let us assume we had a data set about nuclear explosions during the Cold War; by turning it into an audio track, we can hear the USA and Russia playing "ping-pong". The data I used starts from 30 Nov 1971. The voice of the USA is a bass guitar, Russia speaks through a marimba, and China through a piano.

Many tools are available for R: sonify, seewave, tuneR and soundgen. An online tool, released earlier this year, is TwoTone. To produce the track above I pre-processed my data with R, imported it into TwoTone and edited the result in Audacity.

To conclude, I believe that sonification will prove itself valuable in bringing the culture of data to the masses. By mapping data to different sounds and changing their amplitude, pitch and tempo, we can create memorable tracks that convey both emotion and information.
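To make the idea of mapping data to pitch concrete, here is a minimal sketch using only the Python standard library. The function name, parameter names and frequency range are illustrative choices of mine, not taken from any of the R packages or TwoTone: each data point becomes a short sine-wave note, with larger values mapped to higher pitches, and the result is written to a WAV file.

```python
import math
import struct
import wave

def sonify_series(values, out_path="track.wav", rate=44100, note_dur=0.25,
                  low_hz=220.0, high_hz=880.0):
    """Map each data point to a sine-wave note: larger values -> higher pitch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant series
    n = int(rate * note_dur)  # samples per note
    frames = bytearray()
    for v in values:
        # linear map: smallest value -> low_hz, largest -> high_hz
        freq = low_hz + (v - lo) / span * (high_hz - low_hz)
        for i in range(n):
            # short linear fade in/out to avoid clicks between notes
            env = min(1.0, i / 500, (n - i) / 500)
            sample = int(32767 * 0.8 * env * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)  # 16-bit signed little-endian
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return out_path

# e.g. sonify_series([1, 3, 2, 5, 4]) writes a five-note "data track"
```

Real tools of course go further, adding tempo, amplitude and instrument mappings, but the core idea is exactly this: a function from data values to sound parameters.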

For more information about the topic, you can read The Sonification Handbook, whose chapters are fully downloadable and written by leading experts in the field.

Bibliography

[1] Kramer, Gregory, ed. (1994). Auditory Display: Sonification, Audification, and Auditory Interfaces. Santa Fe Institute Studies in the Sciences of Complexity. Proceedings Volume XVIII. Reading, MA: Addison-Wesley. ISBN 978-0-201-62603-2.

[2] Chambers, J. M. and Mathews, M. V. and Moore, F. R. (1974), "Auditory Data Inspection", Technical Memorandum 74-1214-20, AT&T Bell Laboratories

[3] Frysinger, S. P. (2005), "A brief history of auditory data representation to the 1980s", in Brazil, Eoin (ed.), Proceedings of the 11th International Conference on Auditory Display (ICAD2005): 410-413
