Imagine you’re beginning your first firearms field recording session. You want to record the gun shot sound effects from every angle. So, you’ve arranged a handful of microphones nearby. You’ve placed others in the distance. Cables snake across the field from a half dozen microphones to… where?
Are they connected to a single recorder? Or, do you have many units spread across the field instead?
What’s best? How do you capture multiple channels simultaneously? How do you keep every track synchronized? How do you ensure all your gunshots are in alignment when mastering them later? And why does this matter for field recordists?
Today’s post is the first of a two-part series about field recordings and synchronization.
Multi-track recording and field recording sync may seem like a basic subject, second nature to most recordists. It may seem obvious. What is less obvious is how sync affects the later stages of a sound clip’s arc, when mastering sound effects.
In reality, sound fx sync is a deceptively important issue that is easily overlooked in the field, yet has a huge impact on editing sound clips. So, the two articles explore the importance of tandem field recordings on location, and in the edit suite:
- How to add sync slates to your field recordings.
- How to synchronize field recordings when mastering clips, afterwards.
The first post will begin with the basics. It will introduce the role channel selection plays when field recording, as well as the importance of sync slating. That will prepare you for next week’s article, where I’ll share a quick tip for ensuring sync when mastering in Pro Tools.
Simple and Complex Sound Clips
We’re surrounded by a variety of sound effects. Some are simple. Consider a door closing, for example. Others, such as machines, gunshots, and cars, are far more complex. They include many different textures and voices.
Simple sounds can be captured sufficiently in mono with one channel. You may use two-channel stereo recordings if you want more breadth, room, or heft. For instance, you may use a mono microphone to capture a door closing cleanly and closely, with a focus on the tongue of the door sliding into place. You may wish to record in stereo to gather a sense of the room the door is in, say, to capture the reverberant sound of a prison cell door slamming shut.
More complex sounds benefit from even more channels. A car field recording may require separate microphones to capture the voice of the engine, the exhaust at the muffler, and the interior of the cabin, for example.
Capturing Complex Sound FX
So, how do you capture the many voices of complex sounds?
If you’re lucky, you will be using a multi-track audio recorder. That will capture every channel of audio at the same time, in perfect alignment. All audio will be saved on one device, after all. Something like a Sound Devices 788T or a Zaxcom Deva would do. Those have 8+ channels, and can easily accommodate the many microphones you are using to record car or gun sounds. It’s also possible to sync a few two-track Sound Devices recorders together via C. Link.
I’d imagine that’s the obvious solution to most of you. But what happens when that’s not the case? What if you can’t use a multi-track recorder? And how does this affect mastering the field recordings you capture?
It’s important to note that using a multi-track recorder is not always possible, or even the most common or best scenario. Multi-track recorders may be beyond the budget of new recordists. Some pros prefer not to haul cumbersome 8+ channel recorders to every shoot, and opt for a more portable form factor. Let’s not forget that even an advanced pro with a 24-track Aaton Cantar X3 unit will bring along a portable recorder to supplement their main kit, perhaps as backup, or as a “disposable” microphone that can be placed where others cannot reach.
As a result, sound pros may record with gear from different families: a Sound Devices unit, a Sony PCM-D100, or a Zoom H4n. Those recorders don’t talk to each other. This presents an issue for field recordists. Why?
Synchronization and Multiple Audio Recorders
Imagine using recorders from each of those equipment families during a firearms shoot. The recordist will start recording on the Sony first, then punch in on the Zoom next, then walk over to the Sound Devices and begin recording there a few seconds later.
The result? Each recorder will have slightly delayed start times. They may be different lengths, too. Such a recording session may look like this:
| Recorder | Start Time | Stop Time | Duration |
| --- | --- | --- | --- |
Here’s another way of looking at the recording duration:
Of course, those recorders will capture the audio. That’s not the problem. What’s the issue, then?
As you can see, each device began and stopped recording at different times. They have different durations, too. What does this mean?
Well, let’s say the gunshot occurred at 00:30 in absolute time (see the marker in the image above). That means the blast will appear at 00:30 in the Sony file since it started first, then at 00:27 in the Zoom clip, and 00:20 in the Sound Devices track.
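The arithmetic behind those offsets can be sketched in a few lines. This is a minimal illustration using the start times implied by the example above (the Sony rolling first, the Zoom three seconds later, the Sound Devices ten seconds later); the helper function is hypothetical, not part of any recorder’s software:

```python
# Minimal sketch: given each recorder's start time (seconds after the
# first recorder rolled) and the event's absolute time, compute where
# the event appears inside each file. Start times follow the example
# above; the gunshot happens at 00:30 absolute time.

def event_position_in_file(recorder_start, event_absolute):
    """Offset of the event within a file that began at `recorder_start`."""
    return event_absolute - recorder_start

recorder_starts = {
    "Sony": 0,            # rolled first
    "Zoom": 3,            # punched in a few seconds later
    "Sound Devices": 10,  # started last
}

gunshot_absolute = 30  # 00:30 in absolute time

positions = {
    name: event_position_in_file(start, gunshot_absolute)
    for name, start in recorder_starts.items()
}
# Each file stores the same blast at a different timestamp (30, 27,
# and 20 seconds), which is why the clips drift apart when lined up
# by their start times.
```

The same subtraction works for any event: each recorder’s file is simply a window onto absolute time, shifted by however late that recorder started.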
This becomes a major issue when editing those sounds later. The field recordings will be completely out of alignment. Simply lining up these sound files by their start time won’t work. The gunshot will sound staggered. This image illustrates the problem when the clips are lined up in the default manner, by their start time:
Of course, we need the gunshot completely in sync. The blast must occur at the same time in our editing timeline, lined up perfectly from top to bottom:
This is done to help edit the tracks more easily. Synchronized field recordings also provide unified, alternative perspectives for editors. Synced takes present the gunshot blast at the same time, for all perspectives. That allows editors to flip quickly between the blast perspectives, and choose which is best for their project.
Recording Fighter Jets with Two Recorders
Let’s look at a specific example.
I faced this problem when recording fighter jets a few weeks ago. I used a Neumann 191-i, tracked to a Sound Devices 722 recorder. That was my main rig. I also brought a Sony PCM-D50 portable recorder. I used that as backup, and also to explore the differences between the sound of the D50’s microphones and the 191.
During the shoot, I’d typically start recording with the 722 first, then begin recording with the D50 shortly afterwards. The delay was usually less than a second. However, that delay was still present. As a result, the tracks were slightly out of alignment. Here’s a screenshot of the two files in Pro Tools.
As you can see, the waveforms appear similar. They even seem to be closely synchronized. However, if we take a closer look at the area around the waveform spike, you can see they are slightly out of alignment:
The D50 started a bit later, so the spike appears sooner in the timeline when the clips are lined up by their start time. The delay isn’t much, perhaps a quarter of a second. Just the same, it has a large impact on mastering sound fx.
Let’s return to our gunshot recordings. Ideally, the gunshot files would line up like this:
Note that the sound file start times are staggered, but the gunshots are in sync. I’ll share a trick for aligning the takes this way in next week’s post. First, though, let’s look at a workaround for the problem. How can you accommodate for sound effects that are out of sync while in the field?
How Slating Helps with Sync
This is actually a common problem.
The easiest way to solve this is to use a multi-track recorder, of course. That will capture all audio channels on one device, all neatly stacked and organized in sync. As I mentioned above, that’s not always possible, however.
In all other situations, expect that multiple recorders will always be out of sync. How can you prepare to synchronize your recordings for editing, afterwards?
One solution is to slate your field recordings. Earlier articles here explored sound effects slating (post one, post two). They shared how to use descriptive, verbal slates to identify sound effects in the edit suite, later.
How to Sync Slate Field Recordings
Sync slating is a bit different. Instead of speaking a verbal cue before a field recording, the recordist makes a sharp, brief, loud sound either before or after the verbal cue.
Clapperboards are excellent for this. You’ve seen these black and white boards used on set. They’ll create a brief crack that will jump out in a sound file’s waveform. This is in contrast to a verbal slate, which creates indistinct, blob-like waveforms that are relatively harder to spot and line up.
If you don’t have a clapperboard, a single, sharp hand clap will do. Some recordists use megaphone tones, or air horn blasts. A starter pistol is another option.
The point of sync slating is to supplement your verbal description with a sharp, distinctive sound that will jump out from the rest of the audio you record.
This is what I did in the jet recording. The sharp spike was a hand clap that provided a signpost I could use to line up the audio afterwards.
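In an editor this lining-up is done by eye, but the same idea can be expressed in code: find the slate’s sharp spike (the loudest sample) in each track, then shift each track so the spikes coincide. This is a simplified sketch with tiny made-up waveforms, not the actual Pro Tools workflow described in this series:

```python
# Simplified sketch of spike-based alignment: locate the loudest
# sample (the hand-clap slate) in each track, then compute how many
# samples to trim from each track's head so all spikes line up.
# The tracks below are tiny made-up waveforms, not real recordings.

def spike_index(samples):
    """Index of the loudest sample -- a sharp clap dominates the waveform."""
    return max(range(len(samples)), key=lambda i: abs(samples[i]))

def alignment_offsets(tracks):
    """Samples to trim from the head of each track so spikes coincide."""
    spikes = {name: spike_index(s) for name, s in tracks.items()}
    earliest = min(spikes.values())
    return {name: idx - earliest for name, idx in spikes.items()}

tracks = {
    "main_rig": [0.0, 0.1, 0.0, 0.9, 0.1, 0.0, 0.0],   # clap at sample 3
    "portable": [0.0, 0.1, 0.0, 0.0, 0.0, 0.95, 0.1],  # clap at sample 5
}

offsets = alignment_offsets(tracks)
# Trimming 2 samples from the portable track puts both claps at the
# same timeline position; the main rig needs no trim.
```

This works only because the slate is the loudest, most distinctive moment in the take, which is exactly why a sharp clap beats a spoken cue for sync purposes.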
A Simple, But Essential Step
Well, that wasn’t so hard, was it? Simply create a short, sharp sound before you begin recording, or, if you forget, at the end of the take.
Yes, sync slating seems simple. Just the same, it’s essential to get in the habit of adding sync slates to your verbal slates. Why?
Because it is such an easy step, it is often taken for granted in the field. However, as we’ll see in next week’s post, forgetting a sync slate has a huge impact on mastering multi-channel sound effects. It can make the difference between doubling your mastering time and effortlessly zipping through your clips.
The basic nature of sync slating makes it easy to overlook. After all, you may not be thinking about mastering while bustling about capturing sound fx in the field. Practice adding sync slates to your verbal cues, and you’ll have all the tools you need to carry a sound effect through its entire arc, from research to wrap.