For a client’s podcast episode, I recently received a mono file containing 4-5 people on stage as part of a panel discussion. The levels for each participant in the raw recording varied GREATLY (see the waveform image here).
I’ve processed lots of recordings like this in the past, and it’s never easy to process these files so that each participant sounds good in the final episode.
In this particular case:
First I removed several loud thuds due to people unplugging mics, etc.
Then I evened out the levels of all the participants using various processors in this order:
- Compression
- Vocal Rider
- Multiband compression
- Compression
- Compression
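To make the “evening out” idea concrete, here is a minimal sketch of the kind of automatic level riding a tool like Vocal Rider performs: measure the short-term RMS level of each block and apply a gain that pushes it toward a target level, with a ceiling on how much quiet material can be boosted. The block size, target level, and gain limit are all assumptions for illustration, not the plugin’s actual parameters.

```python
import math

def rms(block):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in block) / len(block))

def ride_gain(samples, block_size=512, target_rms=0.2, max_gain=8.0):
    """Crude level rider: per-block gain toward a target RMS level.

    max_gain caps how far quiet passages (or silence) get boosted,
    so room noise between speakers is not pulled up to full level.
    """
    out = []
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        level = rms(block)
        gain = min(target_rms / level, max_gain) if level > 1e-6 else 1.0
        out.extend(s * gain for s in block)
    return out

# Two "participants" at very different levels on the same mono track.
quiet = [0.05 * math.sin(2 * math.pi * 220 * n / 8000) for n in range(4096)]
loud = [0.5 * math.sin(2 * math.pi * 220 * n / 8000) for n in range(4096)]
leveled = ride_gain(quiet + loud)
```

After riding, the quiet and loud halves sit at roughly the same RMS level, which is the point of putting a rider before (or between) the compression stages: the compressors then work on a more consistent signal instead of fighting huge level jumps.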
How do you handle situations like this?
And please keep in mind, the Daily Goody is only a tiny little tip, fact or lesson every day. Please don’t expect any of these posts to be long, earth-shattering masterpieces that instantly answer every single question you can think of and completely transform you into a world class podcast engineer. “Little by little, a little becomes a lot.”
One Response
From what you describe, provided there was no crosstalk, I would have sliced the recording up by speaker, put each one on their own track, and processed each individually until it was balanced in volume, range and tone.
Crosstalk makes this approach difficult.
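The per-speaker approach in this response can be sketched simply: given segment boundaries for each speaker (here just hypothetical hand-marked sample offsets, since the source doesn’t describe how they were found), normalize each slice on its own before mixing back. Peak normalization stands in for the fuller per-track processing the commenter describes.

```python
import math

def peak_normalize(samples, target_peak=0.9):
    """Scale a segment so its loudest sample hits target_peak."""
    peak = max(abs(s) for s in samples)
    if peak < 1e-9:  # silent segment: leave untouched
        return list(samples)
    gain = target_peak / peak
    return [s * gain for s in samples]

# Hypothetical speaker segments as (start, end) sample offsets,
# e.g. marked by hand while scrubbing through the recording.
segments = [(0, 4000), (4000, 9000), (9000, 12000)]

# Synthetic mono recording: three speakers at very different levels.
recording = (
    [0.1 * math.sin(n / 10) for n in range(4000)]
    + [0.8 * math.sin(n / 10) for n in range(5000)]
    + [0.3 * math.sin(n / 10) for n in range(3000)]
)

balanced = []
for start, end in segments:
    balanced.extend(peak_normalize(recording[start:end]))
```

This is exactly why crosstalk breaks the approach: if two speakers bleed into the same slice, one gain setting can’t be right for both of them.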