How can you make data accessible and interpretable? You’re probably thinking of some kind of visualization now. But you could also use sound, in a process called sonification.
Sonification is not new. Think of Geiger counters, for example. Typically, Geiger counters have speakers that emit a click each time they detect radiation; the more clicks per second, the higher the radiation level.
Another example of sonification is the variometer. Variometers measure vertical speed. Pilots use them to tell whether they are climbing or descending, and at what rate. Variometers for paragliders usually also have audio signals that indicate whether you are climbing or descending. If the pitch of the sound goes up, you are climbing; if the pitch goes down, you are descending.
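The variometer’s climb-to-pitch mapping is easy to sketch in code. The following is a minimal illustration, not taken from any real device; the function name and mapping constants are my own choices. Because pitch perception is logarithmic in frequency, mapping each m/s of climb to a fixed number of semitones is a natural choice:

```python
def vario_tone(vertical_speed, base_freq=600.0, semitones_per_mps=2.0):
    """Map vertical speed (m/s) to a beep frequency (Hz).

    Climbing raises the pitch, sinking lowers it. We shift the pitch
    by a fixed number of semitones per m/s, because pitch perception
    is logarithmic in frequency (12 semitones = one octave = 2x).
    """
    return base_freq * 2 ** (vertical_speed * semitones_per_mps / 12.0)

print(round(vario_tone(0.0)))   # level flight: the base tone
print(round(vario_tone(3.0)))   # climbing: higher pitch
print(round(vario_tone(-3.0)))  # sinking: lower pitch
```

With these constants, 6 m/s of climb raises the tone by exactly one octave; real devices use their own tuning.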
Geiger counters and variometers use sonification so operators can focus on other tasks. For example, they can check their surroundings, rather than looking at a screen. In other words, with sonification you can use an additional “input channel” to the data analyst’s brain.
But there is another way sonification can be useful. Sound is a temporal signal, so you can use it to track changes over time. Humans are quite good at perceiving changes in a sound signal, even if they don’t consider themselves musically talented. Sonification taps into this skill.
What’s happening in sonification currently? What are other examples, besides Geiger counters and variometers? I used Mergeflow’s tech discovery software to find out.
Now, let’s look at some findings.
Using sound to analyze and detect cybersecurity events
Security operations centers, or SOCs, monitor and defend enterprise or government information systems. These systems include networks and devices, as well as the activities that take place across these systems.
People in SOCs spend a lot of time identifying and monitoring changes over time. For example, if they identify unusual network activity, this may raise a red flag.
Louise Axon and colleagues from the University of Oxford have studied sonification in the context of SOCs. They found that SOC employees liked sonification most for anomaly detection and for “peripheral monitoring”. Peripheral monitoring is when you monitor something but your main focus is on another task.
There are some patents in this area as well. For example, Neal Horstmeyer, Diana Horn, and Shirish Shanbhag from Cisco have patented sonification for detecting cyber attacks. In a similar area, Alexandr Kuzmin from Sberbank holds a patent on sonification for network-level events.
Matthew Galligan and Nhan Nguyen from the US Navy hold a patent on using sonification for continuous monitoring of complex data metrics. They even provide some musical notation in their patent:
Have you tried playing this on a musical instrument? If you don’t have an instrument, you could try a piano app for your smartphone. If you then transpose the notes above to a minor scale, they sound more “dramatic”. Sonification can trigger visceral reactions, just like movie soundtracks do. You can use this effect for anomaly detection, for example: a visceral sound can make anomalies “jump out”.
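If you want to see why the minor version sounds different: lowering the 3rd, 6th, and 7th scale degrees by a semitone turns a major melody into its natural-minor counterpart. Here is a small sketch; the melody is a plain C major scale in MIDI note numbers, not the notation from the patent:

```python
# MIDI note numbers: 60 = middle C. A C major scale, one octave up:
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def major_to_minor(melody, tonic=60):
    """Lower the 3rd, 6th, and 7th scale degrees by one semitone,
    turning a major-key melody into its natural-minor version."""
    flatten = {4, 9, 11}  # semitones above the tonic: major 3rd, 6th, 7th
    return [n - 1 if (n - tonic) % 12 in flatten else n for n in melody]

print(major_to_minor(C_MAJOR))  # C natural minor: [60, 62, 63, 65, 67, 68, 70, 72]
```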
Sonification to support navigation for neurosurgery
Let’s change fields now, and look at neurosurgery. Neurosurgery requires very careful navigation of the surgical probe. In order to make this possible, neurosurgical procedures are mapped out in great detail in advance. Surgeons use imaging data to plan the path of the surgical probe during the actual procedure.
But there is a drawback. Image-based navigation means that surgeons must divide their attention between the patient and the navigation system.
In order to address this drawback, an interdisciplinary team with backgrounds in computer science, music, and brain imaging developed a new approach. Joseph Plazak and colleagues combine image-based with sonification-based navigation. They use sonification to indicate the distance between the surgical probe and the relevant anatomical location.
Plazak and colleagues found that sonification improved navigation accuracy. And there was another benefit as well. Study participants said that sonification made the task easier. This is important. After all, neurosurgical procedures often take a very long time. In order to maintain a high performance level over such extended periods of time, anything that lessens fatigue for surgeons will be useful.
Sonification can reduce the perceived difficulty of a task, particularly when multitasking is involved.
Interestingly, Plazak and colleagues found that sonification by itself didn’t work very well. But the combination of data sonification and visual presentation significantly reduced task difficulty. You can see this in the chart above, which they published in their paper.
The combination of sonification and data visualization may show better results than either one alone.
Using sonification to better understand biological processes
When an organism moves in response to a chemical stimulus, this is called chemotaxis. Bacterial chemotaxis is when a population of bacteria moves toward or away from a chemical stimulus. This stimulus could be food, for example, or a poison.
A better understanding of bacterial chemotaxis could provide insight into the mechanics of certain infectious diseases. It could also advance bacterial tumor therapy. And a better understanding of bacterial chemotaxis might even spur the development of chemical-sensing robots.
If you’d like to get more in-depth information on bacterial behavior, I recommend this video lecture by Howard Berg from Harvard University:
Now, you can see bacteria move when you look at them through a microscope. But it is hard to visually make sense of bacterial swimming behavior. This is because bacteria can’t really swim in straight lines. Instead, they alternate short straight “runs” with random “tumbles”, drifting toward or away from a stimulus. Such movement is called a “biased random walk”.
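A biased random walk is simple to simulate. Here is a toy one-dimensional sketch (names and parameters are illustrative only, not from the research described below): each “tumble” picks a fresh direction, slightly biased toward the stimulus, and that small bias is enough to produce a net drift:

```python
import random

def run_and_tumble(steps=10_000, bias=0.7, seed=42):
    """Toy 1-D biased random walk. At each tumble the bacterium picks
    a new direction: toward the stimulus (+1) with probability `bias`,
    away from it (-1) otherwise. With bias > 0.5 the walker drifts
    toward the stimulus, even though each individual step is random."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < bias else -1
    return position

print(run_and_tumble())  # positive: net drift toward the stimulus
```

Real run-and-tumble motion happens in three dimensions with variable run lengths; this sketch only shows why a small directional bias dominates over many steps.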
This is where the work of Roseanne Ford’s research group at the University of Virginia comes in. Funded by the National Science Foundation, and in collaboration with composer Maxwell Tfirn from Christopher Newport University, they are developing data sonification methods for analyzing bacterial chemotaxis patterns.
Would you like to hear what moving bacteria sound like? Visit this article from the University of Virginia, scroll down a bit, and download the audio files.
Designing new proteins with sonification
Even if you don’t play a musical instrument, you probably know what musical scales are. In most Western cultures, major and minor scales are probably the best-known scales. Around the world, there are many other scales as well. For example, Flamenco music often uses Phrygian scales. And pentatonic scales are used in many different kinds of music, including Jazz, Celtic, and West African music.
But a musical scale for amino acids?
This is what Markus Buehler, a materials scientist from MIT, is working on.
Buehler points out that materials and music have a lot in common: Molecules are not static structures. Rather, they are continuously moving and vibrating. And music, or sound more generally, is vibration too. Molecular sonification translates these molecular movements and vibrations into audible sound.
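One common way to make such vibrations audible is octave transposition: divide the frequency by a power of two until it lands in the audible range. This preserves the ratios between a molecule’s vibration frequencies, so its “chord” keeps its character. The function and the numbers below are my own illustration, not Buehler’s actual pipeline:

```python
def audify_frequency(freq_thz, octaves_down=40):
    """Transpose a molecular vibration frequency (in THz) down into
    the audible range. Shifting by whole octaves (factors of two)
    preserves the frequency ratios between vibrations; 40 octaves
    is an illustrative choice, not a standard value."""
    return freq_thz * 1e12 / 2 ** octaves_down

# ~50 THz is the order of magnitude of protein backbone vibrations
print(audify_frequency(50.0))  # about 45 Hz, a low audible tone
```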
And Buehler discusses another potentially interesting link between molecules and music. In music, counterpoint is about how independent melodic lines interact, “attracting” and “repelling” each other. Johann Sebastian Bach used counterpoint a lot in his work.
Now, Buehler extends this concept of counterpoint to molecules as well. He states that counterpoint could indicate distance or interactions between protein structures. This way, Buehler argues, sonification could help us better understand protein structures.
Buehler’s team also works in the reverse direction, using sonified representations to design new proteins. They validate the new protein designs by a method called normal mode analysis.
How you can try sonification at home
There is an app for sonification. Datavized Technologies, supported by the Google News Initiative, makes TwoTone. You can get started here, and you can get the code from GitHub. And if you’d like to learn more about the background of TwoTone, you can read an article by the data journalist Simon Rogers.
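If you’d rather roll your own than use TwoTone, a bare-bones pitch mapping fits in a short script. This sketch is my own (it is not how TwoTone works internally) and uses only Python’s standard library: it maps each data point onto a two-octave pitch range and writes the result as a WAV file:

```python
import math
import struct
import wave

def sonify(data, filename="data.wav", rate=44100, note_dur=0.25):
    """Map each data point to a pitch and write the result as a WAV file.
    Low values become low notes, high values high notes. The two-octave
    range above 220 Hz is an arbitrary but pleasant-sounding choice."""
    lo, hi = min(data), max(data)
    frames = bytearray()
    for value in data:
        # normalize to 0..1, then map onto two octaves above 220 Hz
        x = (value - lo) / (hi - lo) if hi != lo else 0.5
        freq = 220.0 * 2 ** (2 * x)
        for i in range(int(rate * note_dur)):
            sample = 0.5 * math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(sample * 32767))  # 16-bit PCM
    with wave.open(filename, "wb") as f:
        f.setnchannels(1)      # mono
        f.setsampwidth(2)      # 2 bytes = 16-bit samples
        f.setframerate(rate)
        f.writeframes(bytes(frames))

sonify([3, 1, 4, 1, 5, 9, 2, 6])  # rising values sound as rising pitch
```

Open the resulting `data.wav` in any audio player; each data point becomes a quarter-second tone.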
Then, there are people who play music to their plants because they think it makes them grow better. But do you know anybody who has their plants play music to them? You can now do this too, with a device called PlantWave.
PlantWave converts the tiny movements that plants make into sound. This might actually have applications. Researchers from Virginia Tech are investigating whether sonification of plant micro-movements may help improve plant health. According to the researchers, this could be particularly useful in controlled-environment agriculture, such as vertical or urban farming. Perhaps when the plants start playing the Jaws theme, it may be time to water them.