Maintenance workers inspect the damage to a spire on Benedictine Hall at St. Gregory's University on Nov. 6, 2011, after a magnitude 5.7 earthquake hit Shawnee, Okla. Credit: Sue Ogrocki/Associated Press

I’ve been doing data visualization for a long time, but helping produce a radio show brings a new challenge: creating interesting sound from data.

Over the past few months, I worked with producer Ike Sriskandarajah and reporter Joe Wertz at StateImpact Oklahoma on a radio story for Reveal about Oklahoma’s explosion of earthquakes. Oklahoma used to have only about one or two earthquakes a year that people could feel. Contrast that with today: The state has more like one or two a day. When you chart that data, it makes a good visualization, but Ike asked if we could do an audio version to get the data into the radio piece.

We were fortunate in this case that the dataset – Oklahoma earthquakes – lent itself well to the drama we needed for radio. The earthquakes in Oklahoma went from almost none to a steady increase, then jumped off the charts starting in 2014.

To create the audio of the earthquakes we used in the segment, I built a Python library called MIDITime, which I hope others will find useful. It’s released publicly on our GitHub page and via pip.

Welcome to the party

We certainly were not the first to try what’s known as data sonification.

A recent journo-nerd conference in Minneapolis, SRCCON, had a whole session on data and audio with a great list of resources.

And the idea goes back at least as far as 1666, when scientist Robert Hooke tried to explain to super-diarist Samuel Pepys his idea of how to determine the frequency of a fly’s wing beats.

How we turned earthquakes into sound

I divided the project into three separate tasks:

  • Getting and parsing the data.
  • Translating the data to MIDI (Musical Instrument Digital Interface).
  • Running the MIDI file through a synthesizer for a more professional finished product.

Getting and parsing the data

I downloaded the earthquake data from the ANSS Comprehensive Catalog of global earthquakes, which is maintained by the Northern California Earthquake Data Center. The catalog contains information such as date/time of the quake, magnitude, depth, latitude and longitude, and many more fields that are mostly of interest to earthquake specialists.

The catalog goes back decades, though the completeness decreases the further back you go.

The global dataset of all quakes of any size is quite large, so I filtered out earthquakes below magnitude 3.0, generally considered the smallest earthquake a person is likely to feel. I selected Oklahoma’s earthquakes by mapping them and using a state boundary shapefile.
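The tooling for that step doesn’t really matter, but for the curious, here’s a rough sketch of the filtering in Python with pandas, fiona and shapely – the file names and column names are placeholders, not the actual ANSS export or boundary file I used:

[python]
import fiona
import pandas as pd
from shapely.geometry import Point, shape

# Rough sketch only: file names and column names are placeholders,
# not the actual ANSS export or boundary file.
quakes = pd.read_csv('anss_catalog.csv', parse_dates=['DateTime'])
quakes = quakes[quakes['Magnitude'] >= 3.0]  # keep quakes people might feel

# Keep only quakes that fall inside the Oklahoma state boundary
with fiona.open('oklahoma_boundary.shp') as src:
    oklahoma = shape(next(iter(src))['geometry'])

inside = quakes.apply(
    lambda row: oklahoma.contains(Point(row['Longitude'], row['Latitude'])),
    axis=1)
ok_quakes = quakes[inside]
[/python]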

Because earthquakes were relatively rare in Oklahoma until a few years ago, there wasn’t much seismological sensing gear there either, so smaller earthquakes might not have made it into the catalog until recently. And because we wanted to illustrate change over time, we needed consistent data.

Translating the data to MIDI

The big step was to figure out how to translate the data to sound, both technically and philosophically.

I focused on two variables: the date/time of the earthquake and its magnitude. As we moved through time, notes would be played to represent each earthquake, and the higher the magnitude, the louder and lower the note would be.

I talked with our lead sound designer and engineer, Jim Briggs, and other musicians to see what was the best electronic format that could be written by a computer program and would be easy for high-quality musical tools to read and modify. We landed on MIDI, which isn’t technically audio – it’s a list of notes to be played at a certain time, sort of like electronic sheet music.

I started out trying to use music software such as SuperCollider, but I work on an Ubuntu laptop, and after much banging of my head against the wall, I utterly failed to get any of the Ubuntu sound programs to work. We have a lot of Macs in the office, so next I tried MaxMSP. Max is a visual programming language – you drag and drop little graphical patch cords from an output to an input. It can do amazing things, but I spent a lot of time figuring out how to do simple things like writing a for loop and passing data around, which I’m accustomed to doing in more traditional languages. And Max’s documentation made it difficult to figure out how to output a valid MIDI file once I had the program working. Max is also pricey once you get out of the trial period. So I turned back to Python.

It’s always best to see whether someone else has written the code for your idea, and there were quite a few Python-to-MIDI libraries out there. I ended up building some tools onto the back of MIDIUtil by Ross Duggan. His code handles the actual generation of the MIDI file. I just needed to figure out what notes to play and when to play them.

Notes in MIDI are most easily thought of as the notes on a piano. The lowest key on a piano, A0 (the note A in octave 0), is MIDI pitch 21. The next-highest note, Bb0, is pitch 22. Middle C, or C4, is pitch 60. The highest key on a piano, C8, is MIDI pitch 108.
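That numbering follows a simple pattern: 12 semitones per octave, anchored so that C4 is 60. (Octave-numbering conventions vary – some tools call middle C “C5” – which is worth keeping in mind when comparing outputs.) Here’s a small illustration of the arithmetic; it’s my own helper, not MIDITime’s note_to_midi_pitch():

[python]
# Note name to MIDI pitch: 12 semitones per octave, anchored at C4 = 60.
# My own illustration of the arithmetic, not MIDITime's note_to_midi_pitch().
SEMITONES = {'C': 0, 'C#': 1, 'Db': 1, 'D': 2, 'D#': 3, 'Eb': 3, 'E': 4,
             'F': 5, 'F#': 6, 'Gb': 6, 'G': 7, 'G#': 8, 'Ab': 8, 'A': 9,
             'A#': 10, 'Bb': 10, 'B': 11}

def note_name_to_pitch(name):
    letter, octave = name[:-1], int(name[-1])  # works for single-digit octaves
    return 12 * (octave + 1) + SEMITONES[letter]

print(note_name_to_pitch('A0'))  # 21
print(note_name_to_pitch('C4'))  # 60, middle C
print(note_name_to_pitch('C8'))  # 108
[/python]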

A lot of the rest is analogous to making a visual chart: We use a musical scale instead of an X or Y scale. I decided to map my data over three octaves, starting with octave 4.

(Some of what follows is pseudocode, and there’s more detailed working code on the GitHub page.)
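For context on the snippets below, mymidi is an instance of MIDITime’s main class. Setting it up looks roughly like this – the argument order (tempo, output file, seconds of music per year of data, base octave, octave range) reflects my reading of the library’s documentation, so confirm it against the README:

[python]
# Rough setup for the examples below: 120 bpm, 5 seconds of music per year
# of data, three octaves of range starting at octave 4. Argument order and
# import path may differ by MIDITime version -- check the README.
from miditime.miditime import MIDITime

mymidi = MIDITime(120, 'oklahoma_quakes.mid', 5, 4, 3)
[/python]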

I also figured that just playing up and down the keys of a piano was a little boring, and because flats and sharps are included, the sound wasn’t very musical. Humans want to hear music in a key.

I’m a piano-lessons dropout who liked classical music as a kid, so I know that different keys or modes can be associated with different moods. For example, minor keys tend to be a bit more ominous.

Luckily, I sit next to our Knight-Mozilla fellow, Julia Smith, who just happened to have a database of thousands of musical keys and their note numbers. I settled on D minor because it was a bit ominous and because it’s the key from Bach’s Toccata and Fugue, which has been one of my favorites since I saw Disney’s “Fantasia” as a child.

Choosing an ominous key – or any key – is as much an editorial decision as choosing colors on a map visualization. In this case, there’s not really any way around the fact that suddenly experiencing hundreds of new earthquakes every year is an ominous development, so I felt justified in the decision.

D minor is a scale of these notes:

[python]
d_minor = ['D', 'E', 'F', 'G', 'A', 'Bb', 'C']
[/python]

Next, I needed to map my earthquake data to that scale. For example, the largest earthquake in my data, a magnitude 5.7, needed to be the lowest pitch, D. The smallest earthquake in my data was a magnitude 3.0, but it seemed to me that as in visual charting, it would be a good idea to start my scale at 0 to avoid distorting the data. So a magnitude 0 would be a high C. And because I wanted my data to spread out over three octaves, magnitude 5.7 would be D4, and magnitude 0 would be C6.

So I mapped each magnitude to its place on the scale, first as a percentage. I used a linear scale, but MIDITime also has a logarithmic option. I also set the reverse flag to True because, in this case, I wanted my largest value – my biggest earthquake – to be my lowest note.

So magnitude 5.7 would be 0 percent of the scale, or D4:

[python]
scale_pct = mymidi.linear_scale_pct(0, 5.7, 5.7, True)
# Output: 0.0
note = mymidi.scale_to_note(scale_pct, d_minor)
# Output: 'D4'
[/python]

Magnitude 3.0 would be 47 percent of the scale:

[python]
scale_pct = mymidi.linear_scale_pct(0, 5.7, 3, True)
# Output: 0.4736842105263158
note = mymidi.scale_to_note(scale_pct, d_minor)
# Output: 'F5'
[/python]

And magnitude 0 would be 100 percent of the scale:

[python]
scale_pct = mymidi.linear_scale_pct(0, 5.7, 0, True)
# Output: 1.0
note = mymidi.scale_to_note(scale_pct, d_minor)
# Output: 'C6'
[/python]

Then I ran that through a helper function to transform the note name into a MIDI pitch:

[python]
midi_pitch = mymidi.note_to_midi_pitch('C6')
# Output: 72
[/python]

Let’s do the time warp again (or perhaps for the first time)

Now that I had the pitches sorted out, the timing was the other trick – basically my other axis. I needed to map the span of my dataset (10 years) to a reasonable amount of time to play as a song. I set the tempo of my song to 120 beats per minute, or 2 beats per second. I decided that each year in my data should last 5 seconds in the MIDI, or 10 beats.

To make the dates easier to work with, I converted my Python date objects into a count of days since the epoch (Jan. 1, 1970). For example, June 5, 2015, is 16,591 days since the epoch.
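Plain Python’s datetime module handles that conversion; here’s a minimal sketch (the date is just for illustration):

[python]
from datetime import datetime, timezone

# Days since the Unix epoch, in plain Python (no MIDITime-specific calls).
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
quake_time = datetime(2015, 6, 5, tzinfo=timezone.utc)
days_since_epoch = (quake_time - epoch).total_seconds() / 86400.0
print(days_since_epoch)  # 16591.0
[/python]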

I then used a helper function to calculate on which beat that moment in time should occur in the song, given my selected tempo and seconds-per-year-of-data values from above:

[python]
beat = mymidi.beat(16591.0)
# Output: 454.24 (3 minutes and 47 seconds into the song)
[/python]
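That 454.24 is just a couple of ratios applied to the day count. Here’s a back-of-the-envelope version of the conversion – my own arithmetic for illustration, not MIDITime’s internals:

[python]
# Back-of-the-envelope days-to-beats conversion at 120 bpm, 5 seconds per year.
tempo = 120                  # beats per minute, i.e. 2 beats per second
seconds_per_year = 5         # one year of data lasts 5 seconds in the song
beats_per_year = (tempo / 60.0) * seconds_per_year  # 10 beats per data-year

def days_to_beat(days_since_epoch):
    return days_since_epoch / 365.25 * beats_per_year

print(days_to_beat(16591.0))  # ~454.24
[/python]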

I didn’t want my song to start in 1970, though – I wanted it to start at my first quake in 2005. So I subtracted the date of the first quake from each value as I created a list of MIDI notes:

[python]
note_list = []

for d in my_data:
    # beat and midi_pitch are calculated for each quake, using the
    # scaling helpers shown above
    note_list.append([
        beat - start_time,
        midi_pitch,
        100,  # attack: how hard the virtual key is struck
        1  # duration of the note, in beats
    ])
[/python]

In the snippet above, there’s also a place for attack – how hard the virtual key is struck – which can be used to control volume. And finally, there’s a spot to set the duration of the note, in beats. In the piece for the radio show, I used the same scaling techniques I just outlined to modify the attack and duration, but I’ll keep it simple in this post.
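From there, MIDITime writes the file itself. The final step looks roughly like this – method names as I understand them from the library’s documentation, so check the GitHub page for the canonical version:

[python]
# Hand the note list to MIDITime and write out the .mid file.
# Method names per my reading of the MIDITime docs -- confirm in the README.
mymidi.add_track(note_list)
mymidi.save_midi()
[/python]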

Polishing for broadcast

After I exported a MIDI file using the procedure above, I turned it over to our sound engineer, Jim. He imported the MIDI file into Ableton and played the music through a “bell chords” software instrument. He tweaked the attack, which was causing some distortion in my original file, and used Ableton’s much finer controls to adjust the attack, decay, sustain and release. He also added some reverb.

Jim gave us several versions, including one we dubbed the Blade Runner version, which I loved, but some of the reverb effect distracted from understanding the data. We chose a dialed-back version for the explanation in the episode. (Luckily, Jim put the Blade Runner version in the end credits.)

Time to hold hands and talk about what we learned

This was a very linear and literal transcription of data to music, and there’s a lot more fun to be had if you build off this. For example, an event doesn’t have to be just a single note – it could trigger a chord or a melody, or even a key change in a melody. I also thought it would be fun to export audio samples of different years’ earthquakes, for example, then mix those more musically.
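For instance, in the note-list format above, a chord is just several pitches that share the same start beat. A hypothetical tweak might look like this:

[python]
# Hypothetical: play a D-minor triad (D4, F4, A4) for one quake instead of
# a single note. Pitches use the C4 = 60 numbering.
for pitch in [62, 65, 69]:
    note_list.append([beat - start_time, pitch, 100, 1])
[/python]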

Generally, I think the rules of data sonification are analogous to visual charting – keep things as simple as possible to increase clarity, and don’t try to use too many variables at once.

And have fun!


Michael Corey is a former senior data editor. He led a team of data journalists who sought to distill large datasets into compelling and easily understandable stories using the tools of journalism, statistics and programming. His specialties include mapping, the U.S.-Mexico border, scientific data and working with remote sensing. Corey's work has been honored with an Online Journalism Award, an Emmy Award, a Polk Award, an IRE Medal and other national awards. He previously worked for the Des Moines Register and graduated from Drake University.