Q It Up: What’s the method to your mix? Do you just throw all the elements on their tracks and begin adjusting levels as your ear dictates? Do you start with the VO and adjust everything else around it? Do you use EQ to bring elements out in a mix or subdue others? Do you use compression on the stereo master to minimize all the tweaking, or do you try to retain as much dynamic range as possible? Do you use compression on individual tracks to reduce the dynamic range of a VO track, a music track, SFX? Do you switch between different monitors and tweak accordingly? Describe your method, and feel free to add any other approach that gets you to your ultimate goal.

Andrew Frame [andrew[at]bafsoundworks.com], Brandi & Andrew Frame SoundWorks, www.bafsoundworks.com, Lehigh Acres, Florida: No matter where the source material came from (CD, live mic, etc.), I knock the serious peaks down to the average level of the rest of the waveform and scrub out any noise or unwanted junk from the audio. Any EQ tweaking or other work is done at this point. Then I save these “wet” versions as a separate file.

If the mic work was less than optimum, I may apply light companding to the signal to give it a little more body so it won’t get lost in the mix. (Honestly, I’ve never really mastered the “EQ notching” trick to make a vocal pop.)

At this point I’ll drop the files into the multitrack for a rough mix, breaking them up into discrete regions. I don’t pay any attention to levels yet. Once I get all the regions laid out, I’ll go back and knock entire track volumes down (never up), set track pan, then tweak region by region. Lastly, I make some fine adjustments to the volume and pan automation on each region if necessary.

Then a test mix. Go back and tweak volumes and pans, and another test mix. Once I get a mix I like, I’ll massage it with a little companding, hard limiting to knock down the spikes, and a final normalization to -2 dB.

Depending on the “thickness” of the audio, I’ll apply time companding before or after this sequence. A final listen or two, and it’s posted for client download.
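For the curious, that final normalize-to-minus-2 step looks something like this minimal Python sketch (assuming Andrew’s “-2” means a peak 2 dB below full scale, with float audio in the -1 to 1 range):

    import numpy as np

    def normalize_peak(audio, target_dbfs=-2.0):
        # Scale the mix so its loudest sample lands at target_dbfs.
        peak = np.max(np.abs(audio))
        if peak == 0:
            return audio
        return audio * (10 ** (target_dbfs / 20.0) / peak)  # 10^(-2/20) ~ 0.794

    # A full-scale spike ends up at roughly 0.794 after normalizing:
    print(normalize_peak(np.array([0.2, -1.0, 0.5])))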

When we’re processing a talent read for a client, like a video house that only wants the v/o, we’ll take the raw read and scrub out all the bloopers and breaths. Then any noise reduction, if needed. Finally, a light companding, limiting and normalizing sequence. We then splice this wet audio into the file with the original dry, raw take and send it to the client. This way, all they have to do is drop it into their NLE and add whatever music track they have. No additional work on their part. They get it clean and exactly timed to length every time.

Blair Trosper [btrosper[at]jpc.com]: I’m usually pretty “A.D.D.” when it comes to quick or routine projects. I’ll often find myself just loading elements into the multitrack and moving them around as quickly as possible while adjusting their levels. I guess I should feel more ashamed of that, but it seems to work.

When I sit down to do more serious work like a huge promo, a special, or film/TV music, I’m very, very careful about how I do things. For the radio side of things, I tend to mix the audio in such a way that the only “boom,” or low end, of the audio is given to either Lonnie Perkins (our VO guy) or purposefully bassy elements. I try to notch or high-pass filter elements to give preference to the mid-range and high end. This is especially true of news clips, host bits, or caller clips that we incorporate into promos. They’ll get high-pass filtered at 120 or 150 Hz, then blown through a quick, light compressor/limiter to sweeten them up. Processing audio this way helps it “pop” through the music.
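A bare-bones Python/SciPy sketch of that clip treatment: a 120 Hz high-pass, followed by a crude tanh soft-clipper standing in for the quick, light compressor/limiter (the filter order and drive amount are invented placeholders, not Blair’s settings):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def clean_clip(x, sr, cutoff_hz=120.0, drive=1.2):
        # Roll off everything below the cutoff so the clip stays out of the VO's low end.
        sos = butter(4, cutoff_hz, btype='highpass', fs=sr, output='sos')
        y = sosfilt(sos, x)
        # Crude stand-in for a light compressor/limiter: tanh soft-clipping,
        # rescaled so unity-level audio still peaks near unity.
        return np.tanh(y * drive) / np.tanh(drive)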

The only thing I ever devote the full frequency range to is Lonnie. Every other element of audio is carefully considered for how it will work AROUND him.

When working with imaging or music, you have to give careful consideration to how you’re delegating your frequency space. For example, if I am going to use a “hit” to emphasize a beat in a bed, I will low-pass filter the audio so that it keeps only the bass portion. You can run out of headroom very quickly if you’re not careful, and your audio can sound like mush even BEFORE it’s gone through your station’s pre-processor and Optimod/Omnia/etc.
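Here’s a toy Python illustration of that headroom math (the frequencies and levels are invented): a full-range hit stacked on a bed overshoots full scale, while low-passing the hit first leaves room.

    import numpy as np
    from scipy.signal import butter, sosfilt

    sr = 44100
    t = np.arange(sr) / sr
    hit = 0.4 * np.sin(2 * np.pi * 80 * t) + 0.4 * np.sin(2 * np.pi * 3000 * t)
    bed = 0.5 * np.sin(2 * np.pi * 3000 * t)

    print(np.max(np.abs(hit + bed)))   # ~1.3: past full scale, hello mush

    # Keep only the bass portion of the hit, and the sum fits again.
    sos = butter(4, 200.0, btype='lowpass', fs=sr, output='sos')
    print(np.max(np.abs(sosfilt(sos, hit) + bed)))   # ~0.9: headroom preserved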

I usually shy away from EQ. I guess it’s more of a habit thing, but I’m typically not working with material that requires equalization. It can be necessary sometimes for bad audio from the field, but that’s a rare exception for me.

Another thing I’m careful to do when mixing is leave as much dynamic range as possible. If you’re hard limiting your audio before you put it on the air, you’re outsmarting yourself. Like I’ve mentioned, it’s likely going to be passed through two aggressive processing chains before it hits the air, so any lazy mixing or “mush” in the audio is going to be exaggerated. Each time I go to a new station, it’s trial and error to get a feel for what the processing will do to my audio. You adjust and get used to it over time, sure, but it never hurts for your audio to be “clean” going in.

It’s also critical in my world to AVOID producing while wearing headphones, since they can be very deceiving and make audio sound more “sweet” when it’s really pretty crappy (or out of tune).

John Pellegrini [JohnP[at]gogrand.com]: My methodology for mixing has changed over the years due to technical upgrades and my own ability to comprehend big words.

Way back when everything was on tape and vinyl, I would literally put a music record on the turntable, start my voiceover when the music started (hopefully with no cue-burn on the particular record), and keep reading till I finished the copy. This was mostly due to a lack of multitrack recording equipment. Of course, if I wanted to multitrack anything back then, I would have to record the drop-in segments on several different carts and fire them off live into the tape while I was reading the copy over the music bed. Needless to say, this led to numerous re-takes.

Even when the transition to CDs was made, I would still read my copy over a music bed more often than not, because I liked pacing my delivery to match the tempo of the music - and sometimes I would even adjust the pitch of my voice to match the key of the song - not singing, mind you, but speaking normally with the tone of my voice in the same musical range as the chords of the song. Again, this method required a lot of retakes.

I began to record my voice separately when we got digital multitracking capability. This helped immensely for editing purposes. Also, when I worked at WLS, I realized that the added compression on the broadcast signal made my pauses for breaths sound like I had emphysema. By recording my voice separately, I can now isolate the breaths and drop the volume on them... the pauses are still there, but it doesn’t sound like I’m having an asthma attack while talking about sending flowers to your sweetie.
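An automated take on that breath trick might look like the rough Python sketch below; it ducks quiet stretches instead of deleting them (the threshold and duck depth are placeholders, and John’s by-hand edits would certainly beat it):

    import numpy as np

    def duck_breaths(vo, sr, thresh_db=-30.0, duck_db=-12.0, win_ms=20):
        win = max(1, int(sr * win_ms / 1000))
        # Short-term signal power, then decide per sample: breath-quiet or speech?
        power = np.convolve(vo ** 2, np.ones(win) / win, mode='same')
        quiet = 10 * np.log10(power + 1e-12) < thresh_db
        gain = np.where(quiet, 10 ** (duck_db / 20.0), 1.0)
        # Smooth the gain curve so the volume drops don't click.
        gain = np.convolve(gain, np.ones(win) / win, mode='same')
        return vo * gain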

Now at Regent, I’ve taken the multitracking thing as far as my computer’s RAM and gigs of storage will let me. I find it so much easier to place elements exactly where I need them to be and mix them down in the multitrack. I usually only put compression on the voiceover, because the processing of our stations here at Regent is really good, and over-compressing things sounds horrible. We’re also broadcasting in HD on 4 of our 5 stations, so it’s critical to have the mix sound as good as possible. I prefer to let the individual station’s processing determine the final sound quality, because that’s what sounds best when I hear the spots in my car.

I’ve been told many times by many people that the best audio for radio is when both right and left channels are flat-lined at the peak. Well, maybe that’s good for metal stations; however, I’ve always found that the spots that get the most attention from listeners are the ones that are nearly the exact opposite -- big dynamic separation and, above all else, clarity. Too much noise is just too much noise -- clarity is the key to understanding, which is pretty much what we need our listeners to do, isn’t it?

Brian Rhodes [brprods[at]aol.com], Brian Rhodes Productions, brianrhodesvo.com, WKQX-FM, WLUP-FM, Chicago, Illinois: First, I lay down my VO in Sound Forge, edit out the flubs, and save to a high-quality MP3 file. Saves a helluva lot of disk space. Next, it’s off to Vegas! I open a basic starter template that goes like this: four tracks (this number usually grows as the project dictates) — the first two tracks for the VO with just a touch of Waves RenVox compression on each to even things out. Tracks 3 and 4 are for beds and SFX with no plug-ins on the individual tracks. The VO tracks then get routed to a bus (A), and the beds and SFX get routed to another bus (B). The overall level differences between voice and muzak are easier to control this way vs. messing with individual tracks. I add a bit of Waves L1 Ultramaximizer on the vocal bus (A) to make the VO cut through and, again, nothing on the bed and SFX bus (B).

Finally, on the Master bus I have the Waves L2 Ultramaximizer plug-in to bring it all together and get a nice finished spot. In addition, sometimes I’ll add the Waves Linear Multiband compressor on the Master bus before the L2 when I need more control. Works great for concert spots! Those are my basic settings and what I feel is a great starting point for most spots. Before I RAP things up, I can’t stress this enough: I always use the Ultramaximizers sparingly! They can suck the life out of your work in an instant. They ought to carry a warning label because they are overused so much. OK, I’ll get off my pedestal now.
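Reduced to a Python sketch, the routing logic looks something like this (np.clip is a crude stand-in for the Waves limiters, and the gains are placeholders):

    import numpy as np

    def limiter(x, ceiling=0.98):
        # Hard clip as a rough stand-in for an L1/L2-style peak limiter.
        return np.clip(x, -ceiling, ceiling)

    def mix_buses(vo_tracks, bed_tracks, vo_gain=1.0, bed_gain=0.7):
        bus_a = limiter(sum(vo_tracks) * vo_gain)  # VO bus: limited so it cuts through
        bus_b = sum(bed_tracks) * bed_gain         # bed/SFX bus: left untouched
        return limiter(bus_a + bus_b)              # master bus ties it all together

    # One fader per bus instead of per track:
    # spot = mix_buses([vo1, vo2], [bed, sfx], bed_gain=0.6)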

Joel Moss [JMoss[at]webn.com], WEBN-FM, Cincinnati, Ohio: This is actually a pretty interesting topic, in terms of analyzing the process involved in mixing a specific audio project.

Using Sony Vegas 7.0d (coincidentally, reviewed in the April ’07 RAP), I have several project templates that are specific to voice talent. I’m fortunate to still have several freelance voices on retainer, and while the track set-up is essentially the same, there are graphic EQ parameters that are talent-specific. (For this exercise, I’ll reference the multi-talented Michael Bratton, or Brat, as the v/o talent.)

A nice component is that Windows-based ‘themes’ can be applied to various aspects of the program, including using the entire color palette for certain function identification. I color-code tracks in a way that helps me easily separate audio elements: music tracks are usually one color, voices another, and a specific voice ‘EQ’ setting still another. It all helps in being able to maneuver your way around a complicated mix quickly.

Since I’m producing stuff I’ve also written, I have an idea in my head when I actually start hitting the keyboard as to what the piece might sound like, or at least a vague impression.

Before the actual project mix: I begin by dragging the voice file into an editor (Sound Forge is my editor of preference for a number of reasons) and doing a rough cut, sometimes designating markers and/or regions for quick access. Then I listen to the whole file while referencing the copy, sometimes editing that file down to a rough raw voice track, with all the good reads, everything that I will want to have quick access to. I also SAVE ALL OUTTAKES to a designated folder. (I keep an archive of every voice session in a separate dedicated talent folder, ID’d by the session date.)

Again, this is the work flow I’ve adopted, but everyone needs to develop a routine that works for them. There is certainly no wrong or right way to do it, just one that allows a producer to fully exploit the unique playground a digital workstation offers. It’s like throwing paint on a canvas, and then doing the picture; actually, that’s a really lame comparison.

For each template, I have voice tracks assigned with this plug-in chain: graphic EQ and a ‘voice’ compression setting. I may have a third voice track with a different EQ. But this is the way my startup screen is set.

I also use as many tracks as required for SFX, drops, and another couple of tracks for music, rhythm tracks, etc. I may add other plug-ins to these tracks as well, e.g., compression (more about that later), reverb, delay, whatever.

For each specific promo (or whatever the project may be — imaging, parody, etc.), I have a project folder inside a master folder that’s updated annually. Part of my ‘C drive’ tree looks like this: ‘EBN PROMOS > 2007 > PREGNANT BIKINI’. Within the bikini folder are a folder for the raw voice audio (label: brat raw), a folder for the completed promo (label: completed), and a folder for the project file (for Vegas, a .veg extension, label: veg); if needed, I’ll throw a ‘work-parts’ folder in the parent folder as well. I pull all the sonic elements for everything I do from my hard drives, so I never save the audio elements for each project again within these folders; the .veg file keeps the path to all the audio intact, and I have a mirror set of drives at home — so there’s never an issue reconstructing projects on the home system. However, if you work on multiple machines and you’re not obsessive about keeping them identical, this will be a huge problem, which might be easily solved by simply saving all the audio in an associated folder within the project. Most DAWs offer the option to copy all the audio used in the project to a specific folder, in addition to keeping the original files in place. It’s all about how you work, and your resources in terms of drive conservation.
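As a throwaway illustration, here’s a hypothetical Python helper that builds that per-project folder set (folder names lifted from Joel’s example; the root path is made up):

    import os

    def new_promo(root, year, title):
        # One project folder per promo, with Joel's standard subfolders inside.
        base = os.path.join(root, 'EBN PROMOS', str(year), title)
        for sub in ('brat raw', 'completed', 'veg', 'work-parts'):
            os.makedirs(os.path.join(base, sub), exist_ok=True)

    new_promo('C:/', 2007, 'PREGNANT BIKINI')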

Finally, the promo: As mentioned, I begin with two voice tracks on the timeline; I always wait to do final tweaking with EQ (etc.) until all the audio is in there. After the voice track audio is roughly in place, I start adding audio to the other tracks, customizing the music or pacing those elements with the voice as the guide. SFX and anything else are added along with the music, as needed. (Using the various Acid loop libraries allows for a really customizable dynamic staging thing to happen, playing off the copy.) I’ll pretty much use anything that’s royalty-free, if it works. Can’t beat those Valentino vinyl pieces when you need that ‘sound’.

I will add pan and volume envelopes at this point in the mix, and this is important for me: I’ve determined that for my pieces to sound as clean as possible on the air, in addition to the previously mentioned compression on the voice tracks (a modest Vegas preset), I need another compression and delay/reverb plug-in chain on the master bus. These plug-ins are triggered with track-specific envelopes, so I’m not doing anything globally to the project; it’s all track by track. Concerning compression, I should be clear that in my world it comes from a Vegas software plug-in: ‘Wavehammer’. This is a compression tool that offers elegant control, keeping dynamics in place while allowing softer sections to be soft yet audible, with no apparent coloration to the overall mix. Anyway, I use the ‘voice’ Wavehammer Vegas preset on the individual voice tracks, and then run the whole project, music and everything, through a different Wavehammer setting assigned to the mixout bus.

Works for me. Of course, my whole trip is to not make these things more tech-driven than concept-driven. I want the concept to be embellished by the production technique, not overwhelmed by it. One more thing: I use Genelec reference speakers both in the WEBN studio and in a duplicate setup at home. The home speakers are a smaller version, and at low volume they provide a nice proof-read of the mix. (I actually prefer the Genelecs to the JBL monitors that hang from the ceiling.)

I can’t emphasize enough the notion that there is no right or wrong way to go about an audio mix. I think there are certain things that (generally speaking) work better to preserve audio quality, like placing the Wavehammer (compression) plug-in AFTER the EQ plug-in, or any other fx, in a track’s preset chain. That sorta makes sense. Other than that, as Dave Foxx has stated many times, keep your files organized in a system that works for you, and keep it intuitive. When determining where to specifically stash those clips on your hard drive(s), if it’s not obvious, ask yourself, “what’s the source of this audio?” I’ll drop it in a folder as follows: AUDIO VARIOUS > TV GRABBED > MSNBC > TO CATCH A PREDATOR. Then all you gotta do is remember where you first heard the audio and follow the schematic back to the source. Whatever you decide, give it some thought and stick with it; as you build projects over time, you want to have those paths intact, y’know? It WILL save you time when trying to locate something saved months, or (as in my case) years ago.

If you’ve got the luxury of a 24 hour window, then go back and listen to the whole thing with fresh ears. At some point, you’ll know when it’s done.

John Pallarino [JPallarino[at]entercom.com], Entercom, Greenville, South Carolina: The method to my madness starts with a generic template in Adobe Audition 2.0. It consists of the first 2 tracks panned center, and the next 2 tracks panned left/right for a chorus effect. The next 2 tracks are for music; each has a parametric EQ with a 5 dB cut at 2.5 kHz to make a hole for the vocals to sit in. This allows me to run the music a little hotter. I have two monitors, so I can have one screen with the multitrack/edit view, and the second monitor has my faders/mastering screen with phase and frequency analyzer (which is a must-have). All VOs are recorded with a Rode NT1 mic through a dbx processor cut at 80 Hz, an Aphex Compellor, and a Yamaha 990 (for added real-time effects). The rest of the effects are done in the edit view window, directly on the VO or MX/SFX. I don’t like to add effects to the tracks themselves, since I’m not fond of a puzzle on my screen. The plug-ins I use most are from the Waves Gold Bundle (compressor and L1 maximizer, REQ 6-band parametric, reverb, de-esser, etc.).
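That vocal pocket is a standard peaking-EQ move; a minimal Python version using the RBJ cookbook biquad might look like this (the Q is a guess, since John’s exact bandwidth isn’t given):

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, sr, f0=2500.0, gain_db=-5.0, q=1.0):
        # RBJ cookbook peaking filter: a -5 dB dip at 2.5 kHz for the vocal to sit in.
        a_lin = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / sr
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
        return lfilter(b / a[0], a / a[0], x)

    # music = peaking_eq(music, 44100)  # now the bed can run a little hotter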

The mastering process is pretty much the same for imaging and spots, except for the settings. For spots, I mix down, filter the lows a bit if they’re too muddy, and then use the Adobe multiband compressor. I have found that multiband compressors are the secret to great-sounding production. Learn to use them; you will thank me. Imaging ends the same way, except I use the L1 Maximizer to make my imaging really punch through.
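And for the curious, a bare-bones multiband compressor in Python: split into bands, compress each, and sum (the crossover points, thresholds, and ratios are all invented; real ones like Adobe’s add attack/release smoothing):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def compress(x, thresh_db=-18.0, ratio=3.0):
        # Static gain computer: no attack/release, just the compression curve.
        level_db = 20 * np.log10(np.abs(x) + 1e-9)
        over = np.maximum(level_db - thresh_db, 0.0)
        return x * 10 ** (-over * (1 - 1 / ratio) / 20.0)

    def multiband(x, sr, lo=200.0, hi=2000.0):
        low = sosfilt(butter(4, lo, 'lowpass', fs=sr, output='sos'), x)
        mid = sosfilt(butter(4, (lo, hi), 'bandpass', fs=sr, output='sos'), x)
        high = sosfilt(butter(4, hi, 'highpass', fs=sr, output='sos'), x)
        # Each band gets its own squeeze, then they're summed back together.
        return compress(low) + compress(mid, ratio=2.0) + compress(high)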

LOTS MORE IN PART 2 NEXT MONTH!
