Over the past few years, we’ve been exploring sound as a sensory channel for understanding the physics of real-time interactive molecular dynamics simulations. In the molecular sciences, sound remains a vastly underutilized channel for representing data, partly because representational standards in audio are far less well defined than they are in graphics.
Depicting an atom using a sphere is an arbitrary decision, but it’s intelligible because an atom and a sphere are both spatially delimited objects. Defining clearly delimited objects in the audio realm is not as straightforward, either spatially or compositionally. For example, it is difficult to imagine what constitutes an ‘atomistic’ object in a piece of audio design or music. Our work suggests that sound is best utilized for representing properties that are non-local: potential energy, electrostatic energy, local temperature, strain energy, etc. The non-locality of these properties makes them extremely difficult to visualize using conventional graphical rendering strategies (and even if effective strategies existed, they would lead to significant visual congestion).
The video shows a real-time interactive simulation of a 17-ALA peptide in which our resident sonification experts (PhD student Alex Jones, Prof. Tom Mitchell, and Prof. Joseph Hyde) track the potential energy in real time to generate sound interactively. Alex manipulates the small protein from within Narupa, and the sound responds to dynamical changes in the potential energy: it shifts as he takes the protein from its native folded state to a high-energy knotted state, and when he eventually unties the knot, you can hear the protein relax back to a lower-energy state.
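To make the idea concrete, here is a minimal sketch of the kind of parameter-mapping sonification described above: a potential-energy trace is mapped onto pitch and rendered as audio. This is not the team’s actual implementation; the mapping range, the synthetic energy trace, and the function names are all illustrative assumptions, and the sketch writes a WAV file offline rather than synthesizing sound in real time.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def energy_to_frequency(energy, e_min, e_max, f_min=220.0, f_max=880.0):
    """Linearly map a potential-energy value onto an audible pitch range."""
    t = (energy - e_min) / (e_max - e_min)
    t = min(max(t, 0.0), 1.0)  # clamp in case the energy leaves the expected range
    return f_min + t * (f_max - f_min)

def sonify(energies, e_min, e_max, seconds_per_value=0.05, path="energy.wav"):
    """Render an energy trace as a pitch contour and write it to a mono WAV file."""
    frames = []
    phase = 0.0
    for e in energies:
        freq = energy_to_frequency(e, e_min, e_max)
        for _ in range(int(SAMPLE_RATE * seconds_per_value)):
            # Accumulate phase so pitch changes are continuous (click-free)
            phase += 2.0 * math.pi * freq / SAMPLE_RATE
            frames.append(int(0.3 * 32767 * math.sin(phase)))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(struct.pack(f"<{len(frames)}h", *frames))

# Hypothetical energy trace: folded state, a high-energy knotted excursion,
# then relaxation back down (a Gaussian bump for illustration only).
trace = [10.0 + 40.0 * math.exp(-((i - 60) / 15.0) ** 2) for i in range(120)]
sonify(trace, e_min=10.0, e_max=50.0)
```

In a live setting, the same `energy_to_frequency` mapping would be driven by energies streamed from the running simulation and fed to a real-time synthesizer, so that the pitch rises and falls with the protein’s potential energy as the user manipulates it.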