Listening to Energy Flow

Some interesting listening:

It’s a recording of the sound sculptures generated at our recent danceroom Spectroscopy installation at the Barbican. It’s a gentle, ambient sound that ebbs, flows, and washes over you. I’ve been enjoying it. All the sounds were generated in real time, from the motion of people’s virtual energy fields within the exhibition space. There are three primary components that contribute to the sound:

  1. The vibrational energy of people’s fields. This is measured in real time by taking a Fourier transform of the atomic dynamics, and it generates the deep, wave-like sounds you hear in the recording (a rough sketch of this step follows the list).
  2. The location and motion of different particle clusters. The motion of people’s fields creates transient atomic clusters, which we detect and assign to different sonic channels. The cluster positions and velocities generate different sounds (see the second sketch after the list).
  3. The atom-atom collisions. The motion of people’s fields causes different atoms to collide. In the recording, these collisions generate the delicate tinkling sounds.
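
To make the first component a little more concrete, here is a minimal sketch of how a vibrational spectrum could be estimated from a short history of atomic velocities and mapped to a gain for the deep, wave-like sound layer. It is illustrative only, not the installation’s actual code: the array shapes, frame rate, and cutoff frequency are all assumptions.

```python
import numpy as np

def vibrational_spectrum(velocity_history, dt):
    """velocity_history: (n_frames, n_atoms, 2) per-frame atom velocities.
    Returns frequencies and a power spectrum summed over atoms/components."""
    # FFT each atom's velocity trace over time; the summed power approximates
    # a vibrational density of states for the whole field.
    v_hat = np.fft.rfft(velocity_history, axis=0)        # (n_freq, n_atoms, 2)
    spectrum = np.sum(np.abs(v_hat) ** 2, axis=(1, 2))   # (n_freq,)
    freqs = np.fft.rfftfreq(velocity_history.shape[0], d=dt)
    return freqs, spectrum

def wave_layer_gain(freqs, spectrum, cutoff_hz=2.0):
    """Map the fraction of power below `cutoff_hz` to a 0..1 bass-layer gain."""
    low = spectrum[freqs < cutoff_hz].sum()
    return float(low / (spectrum.sum() + 1e-12))

# Example: a fake 3-second history at 60 fps for 500 atoms.
rng = np.random.default_rng(0)
history = rng.normal(size=(180, 500, 2))
freqs, spectrum = vibrational_spectrum(history, dt=1.0 / 60.0)
print("bass layer gain:", wave_layer_gain(freqs, spectrum))
```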
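
And a rough sketch of the second and third components: grouping atoms into transient clusters, sending each cluster’s centre and mean velocity to a sound channel, and flagging close approaches as collision events that could trigger the tinkling sounds. The cutoff distances, channel count, and the naive O(n²) pair loop are assumptions for illustration; the real-time system is implemented rather differently.

```python
import numpy as np

def find_clusters(positions, cutoff=0.5):
    """Label atoms within `cutoff` of each other as one cluster.
    positions: (n_atoms, 2). Returns an integer label per atom."""
    n = len(positions)
    labels = np.arange(n)
    # Naive pairwise merge, fine for a few hundred atoms in a sketch.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < cutoff:
                old, new = max(labels[i], labels[j]), min(labels[i], labels[j])
                labels[labels == old] = new
    return labels

def cluster_channels(positions, velocities, labels, n_channels=8):
    """One (centre, mean velocity) per cluster, assigned round-robin to channels."""
    events = []
    for k, lab in enumerate(np.unique(labels)):
        mask = labels == lab
        events.append({
            "channel": k % n_channels,
            "centre": positions[mask].mean(axis=0),
            "velocity": velocities[mask].mean(axis=0),
        })
    return events

def collision_events(positions, collide_dist=0.1):
    """Atom pairs closer than `collide_dist` -- candidates for tinkle triggers."""
    n = len(positions)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.linalg.norm(positions[i] - positions[j]) < collide_dist]

# Example with random positions/velocities for 50 atoms.
rng = np.random.default_rng(1)
pos, vel = rng.uniform(0, 5, (50, 2)), rng.normal(size=(50, 2))
labels = find_clusters(pos)
print(len(cluster_channels(pos, vel, labels)), "cluster channels,",
      len(collision_events(pos)), "collisions")
```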

2012 Many-Core Developer’s Conference

On 5 December, I attended the UK Many-Core Developer Conference (UKMAC 2012), a supercomputing conference organized by Simon McIntosh-Smith. I gave a presentation and demo of danceroom Spectroscopy, with emphasis on the algorithms and heterogeneous parallelization strategies we’ve implemented to build it (see video above). There were several interesting presentations, including: a keynote lecture by Adapteva’s Andreas Olofsson on designing small, energy-efficient parallel architectures; Alan Gray (Edinburgh, EPCC) on scaling soft matter physics code to more than one thousand (!) GPUs; Zheng Wang (Edinburgh) on auto-generating OpenCL code from OpenMP pragmas; and Pedro Gonnet (Durham) on task-based parallelization algorithms applied to molecular dynamics simulations.