Sonification is a modern process built on decades of research by scientists, musicians and engineers. Over time, advances in computing and digital audio have allowed sonification to develop from a mere idea into a practical means of analysing complex information, scientific measurements, images and visual art.
The timeline below shows key moments in the development of sonification research, highlighting how the field has grown from early scientific experiments to modern uses in music, data science and interactive art. Although aspects of sonification date back as early as 3500 BCE (it has been suggested that Mesopotamian auditors comparing accounts read aloud simultaneously by scribes may be the earliest form of data sonification), this section focuses on historical development from the 17th century to the start of the 21st century.
One of the first known examples of translating data into sound for presentation is Galileo Galilei's experiment rolling a ball down an inclined plane so that, as it rolled, it lightly touched catgut strings stretched above the plane, producing a sound. Every time he repeated the experiment, the sound of the strings had the same rhythm, which he used to verify the quadratic law of falling bodies.
The Geiger counter is a handheld electronic instrument that detects and measures ionising radiation using a tube filled with low-pressure gas, which produces a sound, typically a click, whenever a radiation particle causes ionisation. This is an example of using sonification to let the user focus on something else while still noticing changes: scientists can concentrate on the experiment itself, yet will notice if radiation becomes a danger to their health, because the rate of clicks increases as the counter moves closer to a radiation source.
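The Geiger counter's behaviour can be sketched as a simple simulation. Radiation detection is well modelled as a Poisson process, so the gaps between clicks are exponentially distributed; the function and parameter names below are illustrative, not taken from any real instrument's firmware.

```python
import random

def click_times(counts_per_second, duration_s, seed=0):
    """Simulate Geiger-counter clicks over a time window.

    Gaps between clicks are drawn from an exponential distribution,
    so a higher count rate produces shorter gaps and faster clicking.
    Returns the list of click timestamps (in seconds).
    """
    rng = random.Random(seed)  # seeded for reproducibility
    times, t = [], 0.0
    while True:
        t += rng.expovariate(counts_per_second)  # mean gap = 1 / rate
        if t >= duration_s:
            return times
        times.append(t)

# Moving closer to a source raises the count rate; the ear immediately
# hears the denser stream of clicks without needing to read a display.
far = click_times(counts_per_second=2, duration_s=5)
near = click_times(counts_per_second=50, duration_s=5)
```

This captures why the instrument works as a background monitor: the listener perceives the change in click density, not individual numbers.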
Edmund Fournier d'Albe at the University of Birmingham built a device called the Optophone, which converted printed text into musical tones using light sensors. A blind reader could hold the device over a page and hear the letters as distinct notes: one of the first examples of sonification designed for accessibility, similar to the intentions of our project.
Avant-garde artists developed the idea of turning data into sound before this area was formalised by scientists. The poet Rainer Maria Rilke described tracing the shape of a human skull with a gramophone needle to hear its sound, whilst the artist László Moholy-Nagy proposed etching graphical lines directly onto gramophone records to compose music from visual data. Schoon & Dombois argued these artistic visions were as important to the development of sonification as any scientific experiment.
Pollack and Ficks published the first formal experiments on transmitting information through non-speech sound. They tested whether people could register changes in pitch, timing, loudness, duration and spatial position, and their results showed the human auditory system has even more capacity for receiving information than predicted.
American composer Charles Dodge took data from the Earth's magnetic field and turned it into an electronic music composition, released on Nonesuch Records: this is often cited as the first time scientific data was used to create an artistic work.
Researchers at Bell Laboratories, including John Chambers and Max Mathews, published a paper describing how they added sound to a scatter plot. Different tones encoded different features of the data, providing one of the first tools, albeit a limited one, for exploring scientific data through sound.
The philosopher Don Ihde stated that just as science produces an infinite set of visual images of its phenomena, it could equally produce "musics" from the same data: a key moment in the development of sonification as a concept.
A pulse oximeter is the small clip placed on a fingertip in hospitals to measure how much oxygen is in your blood. When they came into widespread use in the 1980s, many models were designed to sonify the oxygen reading in real time: the higher the pitch of the beep, the higher the blood oxygen concentration. This meant a nurse or anaesthetist could monitor a patient's status without looking at a screen, freeing their eyes for other tasks during surgery or intensive care.
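The oximeter's mapping from reading to sound can be sketched as a small function. The frequency endpoints below are illustrative, not clinical values: real devices use manufacturer-specific tone tables, but the principle is the same, with a higher saturation producing a higher-pitched beep.

```python
def spo2_to_pitch(spo2_percent, low=(85.0, 440.0), high=(100.0, 880.0)):
    """Map a blood-oxygen reading (SpO2, in %) to a beep frequency (Hz).

    `low` and `high` are (saturation, frequency) anchor points; values
    outside the range are clamped. Linear interpolation between the
    anchors means higher saturation sounds as a higher pitch.
    """
    (s0, f0), (s1, f1) = low, high
    spo2 = max(s0, min(s1, spo2_percent))  # clamp to the mapped range
    return f0 + (spo2 - s0) * (f1 - f0) / (s1 - s0)
```

A falling pitch over successive beeps is what alerts the anaesthetist that saturation is dropping, without anyone needing to look at the monitor.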
A microprocessor is the small chip at the heart of a computer that does its calculations by taking in information and processing it. Before microprocessors became cheap and widely available in the late 1970s, producing a sound from data in real time meant building custom physical hardware for each specific purpose. Affordable microprocessors allowed researchers to write software on a general-purpose computer to turn numbers into sound, enabling a surge in research.
Sara Bly at UC Davis published two papers comparing how well people could extract information from sound versus visual charts. She concluded that sound offers a genuine enhancement of, and alternative to, graphic tools, supported by rigorous testing of sonification in practice. Read one of the papers
Meera Blattner introduced "earcons": short abstract musical motifs that act as audio equivalents of visual icons, such as the startup chime of a computer. William Gaver introduced "auditory icons": sounds that are ecologically connected to what they represent, like a paper-tearing sound for deleting a file. These are two different philosophies for how sound should work in computer interfaces, developed independently by the two researchers in the same period.
The NCSA in the United States began an experiment taking the same scientific dataset and producing both a visualisation and a sonification from it simultaneously, so the two could be directly compared. The results were presented at a major conference in 1991, treating sonification as a serious scientific tool.
Gregory Kramer founded the International Community for Auditory Display (ICAD), the world's first academic community dedicated to sonification research. Through this, for the first time, psychologists, engineers, musicians and computer scientists working in the area had a conference, journal and community. Their proceedings are available online.
Kramer edited "Auditory Display: Sonification, Audification and Auditory Interfaces", which gave a formal, shared definition of sonification: "the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation". See full paper
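Kramer's definition can be illustrated with a minimal parameter-mapping sketch: relations in the data (larger/smaller) become relations in the signal (higher/lower pitch). The function name, parameters and frequency range below are illustrative choices, not part of Kramer's text.

```python
def map_to_pitches(values, low_hz=220.0, high_hz=880.0):
    """Rescale a data series onto a frequency range.

    The smallest value maps to `low_hz`, the largest to `high_hz`,
    and everything else falls proportionally in between, so ordering
    relations in the data are preserved as pitch relations in sound.
    Returns one frequency (Hz) per input value.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]
```

Playing the returned frequencies in sequence (e.g. as short sine-wave beeps) would let a listener hear the shape of the series, which is the essence of what the definition calls transforming "data relations into perceived relations".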
Kramer and colleagues produced a report for the US National Science Foundation assessing where the field of sonification stood, finding it scattered, under-theorised and poorly communicated. They identified four things a mature theory would need to address: how to categorise sonification techniques, what kinds of data suit it, how to map data to sound well, and what the limitations are. This remains a useful starting point for understanding the field's challenges. View report
Astronomers discovered that the black hole at the centre of the Perseus galaxy cluster was generating pressure waves through the surrounding hot gas at a pitch 57 octaves below middle C, far too low for human ears to hear directly. It was the first time a cosmic object had been linked to a specific musical pitch, and it fired the public imagination about what it might mean to "hear" the universe.
From this point, we move on to the current state of sonification, with research in medicine and biology, space research via NASA, designing the sound of public spaces and more: you can read about this on the following page.